<!DOCTYPE HTML>
<html lang="en">
<head>
<!-- Generated by javadoc (17) on Wed Jul 02 13:16:04 UTC 2025 -->
<title>Calib3d (OpenCV 4.12.0 Java documentation)</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="dc.created" content="2025-07-02">
<meta name="description" content="declaration: package: org.opencv.calib3d, class: Calib3d">
<meta name="generator" content="javadoc/ClassWriterImpl">
<link rel="stylesheet" type="text/css" href="../../../stylesheet.css" title="Style">
<link rel="stylesheet" type="text/css" href="../../../script-dir/jquery-ui.min.css" title="Style">
<link rel="stylesheet" type="text/css" href="../../../jquery-ui.overrides.css" title="Style">
<script type="text/javascript" src="../../../script.js"></script>
<script type="text/javascript" src="../../../script-dir/jquery-3.7.1.min.js"></script>
<script type="text/javascript" src="../../../script-dir/jquery-ui.min.js"></script>
</head>
<body class="class-declaration-page">
<script type="text/javascript">var evenRowColor = "even-row-color";
var oddRowColor = "odd-row-color";
var tableTab = "table-tab";
var activeTableTab = "active-table-tab";
var pathtoroot = "../../../";
loadScripts(document, 'script');</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<div class="flex-box">
<header role="banner" class="flex-header">
<nav role="navigation">
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="top-nav" id="navbar-top">
<div class="skip-nav"><a href="#skip-navbar-top" title="Skip navigation links">Skip navigation links</a></div>
<div class="about-language">
<script>
var url = window.location.href;
var pos = url.lastIndexOf('/javadoc/');
url = pos >= 0 ? (url.substring(0, pos) + '/javadoc/mymath.js') : (window.location.origin + '/mymath.js');
var script = document.createElement('script');
script.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS-MML_HTMLorMML,' + url;
document.getElementsByTagName('head')[0].appendChild(script);
</script>
</div>
<ul id="navbar-top-firstrow" class="nav-list" title="Navigation">
<li><a href="../../../index.html">Overview</a></li>
<li><a href="package-summary.html">Package</a></li>
<li class="nav-bar-cell1-rev">Class</li>
<li><a href="package-tree.html">Tree</a></li>
<li><a href="../../../index-all.html">Index</a></li>
<li><a href="../../../help-doc.html#class">Help</a></li>
</ul>
</div>
<div class="sub-nav">
<div>
<ul class="sub-nav-list">
<li>Summary: </li>
<li>Nested | </li>
<li><a href="#field-summary">Field</a> | </li>
<li><a href="#constructor-summary">Constr</a> | </li>
<li><a href="#method-summary">Method</a></li>
</ul>
<ul class="sub-nav-list">
<li>Detail: </li>
<li><a href="#field-detail">Field</a> | </li>
<li><a href="#constructor-detail">Constr</a> | </li>
<li><a href="#method-detail">Method</a></li>
</ul>
</div>
<div class="nav-list-search"><label for="search-input">SEARCH:</label>
<input type="text" id="search-input" value="search" disabled="disabled">
<input type="reset" id="reset-button" value="reset" disabled="disabled">
</div>
</div>
<!-- ========= END OF TOP NAVBAR ========= -->
<span class="skip-nav" id="skip-navbar-top"></span></nav>
</header>
<div class="flex-content">
<main role="main">
<!-- ======== START OF CLASS DATA ======== -->
<div class="header">
<div class="sub-title"><span class="package-label-in-type">Package</span> <a href="package-summary.html">org.opencv.calib3d</a></div>
<h1 title="Class Calib3d" class="title">Class Calib3d</h1>
</div>
<div class="inheritance" title="Inheritance Tree"><a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html" title="class or interface in java.lang" class="external-link">java.lang.Object</a>
<div class="inheritance">org.opencv.calib3d.Calib3d</div>
</div>
<section class="class-description" id="class-description">
<hr>
<div class="type-signature"><span class="modifiers">public class </span><span class="element-name type-name-label">Calib3d</span>
<span class="extends-implements">extends <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html" title="class or interface in java.lang" class="external-link">Object</a></span></div>
</section>
<section class="summary">
<ul class="summary-list">
<!-- =========== FIELD SUMMARY =========== -->
<li>
<section class="field-summary" id="field-summary">
<h2>Field Summary</h2>
<div class="caption"><span>Fields</span></div>
<div class="summary-table three-column-summary">
<div class="table-header col-first">Modifier and Type</div>
<div class="table-header col-second">Field</div>
<div class="table-header col-last">Description</div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_CB_ACCURACY" class="member-name-link">CALIB_CB_ACCURACY</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_CB_ADAPTIVE_THRESH" class="member-name-link">CALIB_CB_ADAPTIVE_THRESH</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_CB_ASYMMETRIC_GRID" class="member-name-link">CALIB_CB_ASYMMETRIC_GRID</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_CB_CLUSTERING" class="member-name-link">CALIB_CB_CLUSTERING</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_CB_EXHAUSTIVE" class="member-name-link">CALIB_CB_EXHAUSTIVE</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_CB_FAST_CHECK" class="member-name-link">CALIB_CB_FAST_CHECK</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_CB_FILTER_QUADS" class="member-name-link">CALIB_CB_FILTER_QUADS</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_CB_LARGER" class="member-name-link">CALIB_CB_LARGER</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_CB_MARKER" class="member-name-link">CALIB_CB_MARKER</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_CB_NORMALIZE_IMAGE" class="member-name-link">CALIB_CB_NORMALIZE_IMAGE</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_CB_PLAIN" class="member-name-link">CALIB_CB_PLAIN</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_CB_SYMMETRIC_GRID" class="member-name-link">CALIB_CB_SYMMETRIC_GRID</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_FIX_ASPECT_RATIO" class="member-name-link">CALIB_FIX_ASPECT_RATIO</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_FIX_FOCAL_LENGTH" class="member-name-link">CALIB_FIX_FOCAL_LENGTH</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_FIX_INTRINSIC" class="member-name-link">CALIB_FIX_INTRINSIC</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_FIX_K1" class="member-name-link">CALIB_FIX_K1</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_FIX_K2" class="member-name-link">CALIB_FIX_K2</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_FIX_K3" class="member-name-link">CALIB_FIX_K3</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_FIX_K4" class="member-name-link">CALIB_FIX_K4</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_FIX_K5" class="member-name-link">CALIB_FIX_K5</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_FIX_K6" class="member-name-link">CALIB_FIX_K6</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_FIX_PRINCIPAL_POINT" class="member-name-link">CALIB_FIX_PRINCIPAL_POINT</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_FIX_S1_S2_S3_S4" class="member-name-link">CALIB_FIX_S1_S2_S3_S4</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_FIX_TANGENT_DIST" class="member-name-link">CALIB_FIX_TANGENT_DIST</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_FIX_TAUX_TAUY" class="member-name-link">CALIB_FIX_TAUX_TAUY</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_HAND_EYE_ANDREFF" class="member-name-link">CALIB_HAND_EYE_ANDREFF</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_HAND_EYE_DANIILIDIS" class="member-name-link">CALIB_HAND_EYE_DANIILIDIS</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_HAND_EYE_HORAUD" class="member-name-link">CALIB_HAND_EYE_HORAUD</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_HAND_EYE_PARK" class="member-name-link">CALIB_HAND_EYE_PARK</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_HAND_EYE_TSAI" class="member-name-link">CALIB_HAND_EYE_TSAI</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_NINTRINSIC" class="member-name-link">CALIB_NINTRINSIC</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_RATIONAL_MODEL" class="member-name-link">CALIB_RATIONAL_MODEL</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_ROBOT_WORLD_HAND_EYE_LI" class="member-name-link">CALIB_ROBOT_WORLD_HAND_EYE_LI</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_ROBOT_WORLD_HAND_EYE_SHAH" class="member-name-link">CALIB_ROBOT_WORLD_HAND_EYE_SHAH</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_SAME_FOCAL_LENGTH" class="member-name-link">CALIB_SAME_FOCAL_LENGTH</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_THIN_PRISM_MODEL" class="member-name-link">CALIB_THIN_PRISM_MODEL</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_TILTED_MODEL" class="member-name-link">CALIB_TILTED_MODEL</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_USE_EXTRINSIC_GUESS" class="member-name-link">CALIB_USE_EXTRINSIC_GUESS</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_USE_INTRINSIC_GUESS" class="member-name-link">CALIB_USE_INTRINSIC_GUESS</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_USE_LU" class="member-name-link">CALIB_USE_LU</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_USE_QR" class="member-name-link">CALIB_USE_QR</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CALIB_ZERO_DISPARITY" class="member-name-link">CALIB_ZERO_DISPARITY</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CALIB_ZERO_TANGENT_DIST" class="member-name-link">CALIB_ZERO_TANGENT_DIST</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CirclesGridFinderParameters_ASYMMETRIC_GRID" class="member-name-link">CirclesGridFinderParameters_ASYMMETRIC_GRID</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CirclesGridFinderParameters_SYMMETRIC_GRID" class="member-name-link">CirclesGridFinderParameters_SYMMETRIC_GRID</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#COV_POLISHER" class="member-name-link">COV_POLISHER</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CV_DLS" class="member-name-link">CV_DLS</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CV_EPNP" class="member-name-link">CV_EPNP</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CV_ITERATIVE" class="member-name-link">CV_ITERATIVE</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CV_P3P" class="member-name-link">CV_P3P</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CvLevMarq_CALC_J" class="member-name-link">CvLevMarq_CALC_J</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CvLevMarq_CHECK_ERR" class="member-name-link">CvLevMarq_CHECK_ERR</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#CvLevMarq_DONE" class="member-name-link">CvLevMarq_DONE</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#CvLevMarq_STARTED" class="member-name-link">CvLevMarq_STARTED</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#fisheye_CALIB_CHECK_COND" class="member-name-link">fisheye_CALIB_CHECK_COND</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#fisheye_CALIB_FIX_FOCAL_LENGTH" class="member-name-link">fisheye_CALIB_FIX_FOCAL_LENGTH</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#fisheye_CALIB_FIX_INTRINSIC" class="member-name-link">fisheye_CALIB_FIX_INTRINSIC</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#fisheye_CALIB_FIX_K1" class="member-name-link">fisheye_CALIB_FIX_K1</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#fisheye_CALIB_FIX_K2" class="member-name-link">fisheye_CALIB_FIX_K2</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#fisheye_CALIB_FIX_K3" class="member-name-link">fisheye_CALIB_FIX_K3</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#fisheye_CALIB_FIX_K4" class="member-name-link">fisheye_CALIB_FIX_K4</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#fisheye_CALIB_FIX_PRINCIPAL_POINT" class="member-name-link">fisheye_CALIB_FIX_PRINCIPAL_POINT</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#fisheye_CALIB_FIX_SKEW" class="member-name-link">fisheye_CALIB_FIX_SKEW</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#fisheye_CALIB_RECOMPUTE_EXTRINSIC" class="member-name-link">fisheye_CALIB_RECOMPUTE_EXTRINSIC</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#fisheye_CALIB_USE_INTRINSIC_GUESS" class="member-name-link">fisheye_CALIB_USE_INTRINSIC_GUESS</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#fisheye_CALIB_ZERO_DISPARITY" class="member-name-link">fisheye_CALIB_ZERO_DISPARITY</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#FM_7POINT" class="member-name-link">FM_7POINT</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#FM_8POINT" class="member-name-link">FM_8POINT</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#FM_LMEDS" class="member-name-link">FM_LMEDS</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#FM_RANSAC" class="member-name-link">FM_RANSAC</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#LMEDS" class="member-name-link">LMEDS</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#LOCAL_OPTIM_GC" class="member-name-link">LOCAL_OPTIM_GC</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#LOCAL_OPTIM_INNER_AND_ITER_LO" class="member-name-link">LOCAL_OPTIM_INNER_AND_ITER_LO</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#LOCAL_OPTIM_INNER_LO" class="member-name-link">LOCAL_OPTIM_INNER_LO</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#LOCAL_OPTIM_NULL" class="member-name-link">LOCAL_OPTIM_NULL</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#LOCAL_OPTIM_SIGMA" class="member-name-link">LOCAL_OPTIM_SIGMA</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#LSQ_POLISHER" class="member-name-link">LSQ_POLISHER</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#MAGSAC" class="member-name-link">MAGSAC</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#NEIGH_FLANN_KNN" class="member-name-link">NEIGH_FLANN_KNN</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#NEIGH_FLANN_RADIUS" class="member-name-link">NEIGH_FLANN_RADIUS</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#NEIGH_GRID" class="member-name-link">NEIGH_GRID</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#NONE_POLISHER" class="member-name-link">NONE_POLISHER</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#PROJ_SPHERICAL_EQRECT" class="member-name-link">PROJ_SPHERICAL_EQRECT</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#PROJ_SPHERICAL_ORTHO" class="member-name-link">PROJ_SPHERICAL_ORTHO</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#RANSAC" class="member-name-link">RANSAC</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#RHO" class="member-name-link">RHO</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#SAMPLING_NAPSAC" class="member-name-link">SAMPLING_NAPSAC</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#SAMPLING_PROGRESSIVE_NAPSAC" class="member-name-link">SAMPLING_PROGRESSIVE_NAPSAC</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#SAMPLING_PROSAC" class="member-name-link">SAMPLING_PROSAC</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#SAMPLING_UNIFORM" class="member-name-link">SAMPLING_UNIFORM</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#SCORE_METHOD_LMEDS" class="member-name-link">SCORE_METHOD_LMEDS</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#SCORE_METHOD_MAGSAC" class="member-name-link">SCORE_METHOD_MAGSAC</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#SCORE_METHOD_MSAC" class="member-name-link">SCORE_METHOD_MSAC</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#SCORE_METHOD_RANSAC" class="member-name-link">SCORE_METHOD_RANSAC</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#SOLVEPNP_AP3P" class="member-name-link">SOLVEPNP_AP3P</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#SOLVEPNP_DLS" class="member-name-link">SOLVEPNP_DLS</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#SOLVEPNP_EPNP" class="member-name-link">SOLVEPNP_EPNP</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#SOLVEPNP_IPPE" class="member-name-link">SOLVEPNP_IPPE</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#SOLVEPNP_IPPE_SQUARE" class="member-name-link">SOLVEPNP_IPPE_SQUARE</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#SOLVEPNP_ITERATIVE" class="member-name-link">SOLVEPNP_ITERATIVE</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#SOLVEPNP_MAX_COUNT" class="member-name-link">SOLVEPNP_MAX_COUNT</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#SOLVEPNP_P3P" class="member-name-link">SOLVEPNP_P3P</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#SOLVEPNP_SQPNP" class="member-name-link">SOLVEPNP_SQPNP</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#SOLVEPNP_UPNP" class="member-name-link">SOLVEPNP_UPNP</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#USAC_ACCURATE" class="member-name-link">USAC_ACCURATE</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#USAC_DEFAULT" class="member-name-link">USAC_DEFAULT</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#USAC_FAST" class="member-name-link">USAC_FAST</a></code></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><code>static final int</code></div>
<div class="col-second odd-row-color"><code><a href="#USAC_FM_8PTS" class="member-name-link">USAC_FM_8PTS</a></code></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><code>static final int</code></div>
<div class="col-second even-row-color"><code><a href="#USAC_MAGSAC" class="member-name-link">USAC_MAGSAC</a></code></div>
|
|
<div class="col-last even-row-color"> </div>
|
|
<div class="col-first odd-row-color"><code>static final int</code></div>
|
|
<div class="col-second odd-row-color"><code><a href="#USAC_PARALLEL" class="member-name-link">USAC_PARALLEL</a></code></div>
|
|
<div class="col-last odd-row-color"> </div>
|
|
<div class="col-first even-row-color"><code>static final int</code></div>
|
|
<div class="col-second even-row-color"><code><a href="#USAC_PROSAC" class="member-name-link">USAC_PROSAC</a></code></div>
|
|
<div class="col-last even-row-color"> </div>
|
|
</div>
</section>
</li>
<!-- ======== CONSTRUCTOR SUMMARY ======== -->
<li>
<section class="constructor-summary" id="constructor-summary">
<h2>Constructor Summary</h2>
<div class="caption"><span>Constructors</span></div>
<div class="summary-table two-column-summary">
<div class="table-header col-first">Constructor</div>
<div class="table-header col-last">Description</div>
<div class="col-constructor-name even-row-color"><code><a href="#%3Cinit%3E()" class="member-name-link">Calib3d</a>()</code></div>
<div class="col-last even-row-color"> </div>
</div>
</section>
</li>
<!-- ========== METHOD SUMMARY =========== -->
<li>
<section class="method-summary" id="method-summary">
<h2>Method Summary</h2>
<div id="method-summary-table">
<div class="table-tabs" role="tablist" aria-orientation="horizontal"><button id="method-summary-table-tab0" role="tab" aria-selected="true" aria-controls="method-summary-table.tabpanel" tabindex="0" onkeydown="switchTab(event)" onclick="show('method-summary-table', 'method-summary-table', 3)" class="active-table-tab">All Methods</button><button id="method-summary-table-tab1" role="tab" aria-selected="false" aria-controls="method-summary-table.tabpanel" tabindex="-1" onkeydown="switchTab(event)" onclick="show('method-summary-table', 'method-summary-table-tab1', 3)" class="table-tab">Static Methods</button><button id="method-summary-table-tab4" role="tab" aria-selected="false" aria-controls="method-summary-table.tabpanel" tabindex="-1" onkeydown="switchTab(event)" onclick="show('method-summary-table', 'method-summary-table-tab4', 3)" class="table-tab">Concrete Methods</button></div>
<div id="method-summary-table.tabpanel" role="tabpanel" aria-labelledby="method-summary-table-tab0">
<div class="summary-table three-column-summary">
<div class="table-header col-first">Modifier and Type</div>
<div class="table-header col-second">Method</div>
<div class="table-header col-last">Description</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCamera(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List)" class="member-name-link">calibrateCamera</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCamera(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int)" class="member-name-link">calibrateCamera</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 int flags)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCamera(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int,org.opencv.core.TermCriteria)" class="member-name-link">calibrateCamera</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 int flags,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCameraExtended(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">calibrateCameraExtended</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration
 pattern.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCameraExtended(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">calibrateCameraExtended</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
 int flags)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration
 pattern.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCameraExtended(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)" class="member-name-link">calibrateCameraExtended</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
 int flags,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration
 pattern.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCameraRO(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat)" class="member-name-link">calibrateCameraRO</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 int iFixedPoint,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCameraRO(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,int)" class="member-name-link">calibrateCameraRO</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 int iFixedPoint,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints,
 int flags)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCameraRO(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)" class="member-name-link">calibrateCameraRO</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 int iFixedPoint,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints,
 int flags,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCameraROExtended(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">calibrateCameraROExtended</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 int iFixedPoint,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsObjPoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCameraROExtended(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">calibrateCameraROExtended</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 int iFixedPoint,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsObjPoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
 int flags)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.</div>
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateCameraROExtended(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)" class="member-name-link">calibrateCameraROExtended</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
int iFixedPoint,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsObjPoints,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
int flags,
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateHandEye(java.util.List,java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">calibrateHandEye</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_gripper2base,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_gripper2base,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_target2cam,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_target2cam,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_cam2gripper,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_cam2gripper)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes Hand-Eye calibration: \(_{}^{g}\textrm{T}_c\)</div>
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateHandEye(java.util.List,java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">calibrateHandEye</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_gripper2base,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_gripper2base,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_target2cam,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_target2cam,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_cam2gripper,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_cam2gripper,
int method)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes Hand-Eye calibration: \(_{}^{g}\textrm{T}_c\)</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateRobotWorldHandEye(java.util.List,java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">calibrateRobotWorldHandEye</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_world2cam,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_world2cam,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_base2gripper,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_base2gripper,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_base2world,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_base2world,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_gripper2cam,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_gripper2cam)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes Robot-World/Hand-Eye calibration: \(_{}^{w}\textrm{T}_b\) and \(_{}^{c}\textrm{T}_g\)</div>
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrateRobotWorldHandEye(java.util.List,java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">calibrateRobotWorldHandEye</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_world2cam,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_world2cam,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_base2gripper,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_base2gripper,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_base2world,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_base2world,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_gripper2cam,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_gripper2cam,
int method)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes Robot-World/Hand-Eye calibration: \(_{}^{w}\textrm{T}_b\) and \(_{}^{c}\textrm{T}_g\)</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#calibrationMatrixValues(org.opencv.core.Mat,org.opencv.core.Size,double,double,double%5B%5D,double%5B%5D,double%5B%5D,org.opencv.core.Point,double%5B%5D)" class="member-name-link">calibrationMatrixValues</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
double apertureWidth,
double apertureHeight,
double[] fovx,
double[] fovy,
double[] focalLength,
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> principalPoint,
double[] aspectRatio)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes useful camera characteristics from the camera intrinsic matrix.</div>
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#checkChessboard(org.opencv.core.Mat,org.opencv.core.Size)" class="member-name-link">checkChessboard</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> img,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> size)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">composeRT</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Combines two rotation-and-shift transformations.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">composeRT</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Combines two rotation-and-shift transformations.</div>
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">composeRT</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Combines two rotation-and-shift transformations.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">composeRT</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Combines two rotation-and-shift transformations.</div>
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">composeRT</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt2)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Combines two rotation-and-shift transformations.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">composeRT</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr1)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Combines two rotation-and-shift transformations.</div>
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">composeRT</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dt1)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Combines two rotation-and-shift transformations.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">composeRT</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dt1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr2)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Combines two rotation-and-shift transformations.</div>
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">composeRT</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dt1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dt2)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Combines two rotation-and-shift transformations.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#computeCorrespondEpilines(org.opencv.core.Mat,int,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">computeCorrespondEpilines</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points,
int whichImage,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> lines)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">For points in an image of a stereo pair, computes the corresponding epilines in the other image.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#convertPointsFromHomogeneous(org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">convertPointsFromHomogeneous</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Converts points from homogeneous to Euclidean space.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#convertPointsToHomogeneous(org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">convertPointsToHomogeneous</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Converts points from Euclidean to homogeneous space.</div>
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#correctMatches(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">correctMatches</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newPoints1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newPoints2)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Refines coordinates of corresponding points.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#decomposeEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">decomposeEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Decompose an essential matrix to possible rotations and translation.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#decomposeHomographyMat(org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,java.util.List)" class="member-name-link">decomposeHomographyMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> H,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rotations,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> translations,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> normals)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Decompose a homography matrix to rotation(s), translation(s) and plane normal(s).</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#decomposeProjectionMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">decomposeProjectionMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> transVect)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.</div>
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#decomposeProjectionMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">decomposeProjectionMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> transVect,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixX)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#decomposeProjectionMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">decomposeProjectionMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> transVect,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixX,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixY)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#decomposeProjectionMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">decomposeProjectionMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> transVect,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixX,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixY,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixZ)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#decomposeProjectionMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">decomposeProjectionMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> transVect,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixX,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixY,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixZ,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> eulerAngles)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#drawChessboardCorners(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.MatOfPoint2f,boolean)" class="member-name-link">drawChessboardCorners</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> corners,
|
|
boolean patternWasFound)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Renders the detected chessboard corners.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#drawFrameAxes(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,float)" class="member-name-link">drawFrameAxes</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
float length)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Draws the axes of the world/object coordinate system from pose estimation.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#drawFrameAxes(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,float,int)" class="member-name-link">drawFrameAxes</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
float length,
|
|
int thickness)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Draws the axes of the world/object coordinate system from pose estimation.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">estimateAffine2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">estimateAffine2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">estimateAffine2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)" class="member-name-link">estimateAffine2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long)" class="member-name-link">estimateAffine2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long,double)" class="member-name-link">estimateAffine2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters,
|
|
double confidence)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long,double,long)" class="member-name-link">estimateAffine2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters,
|
|
double confidence,
|
|
long refineIters)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.calib3d.UsacParams)" class="member-name-link">estimateAffine2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> pts1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> pts2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
<a href="UsacParams.html" title="class in org.opencv.calib3d">UsacParams</a> params)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">estimateAffine3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat,double%5B%5D)" class="member-name-link">estimateAffine3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
double[] scale)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat,double%5B%5D,boolean)" class="member-name-link">estimateAffine3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
double[] scale,
|
|
boolean force_rotation)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">estimateAffine3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)" class="member-name-link">estimateAffine3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
double ransacThreshold)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,double)" class="member-name-link">estimateAffine3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
double ransacThreshold,
|
|
double confidence)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">estimateAffinePartial2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">estimateAffinePartial2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">estimateAffinePartial2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)" class="member-name-link">estimateAffinePartial2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long)" class="member-name-link">estimateAffinePartial2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long,double)" class="member-name-link">estimateAffinePartial2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters,
|
|
double confidence)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long,double,long)" class="member-name-link">estimateAffinePartial2D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters,
|
|
double confidence,
|
|
long refineIters)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Scalar.html" title="class in org.opencv.core">Scalar</a></code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateChessboardSharpness(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat)" class="member-name-link">estimateChessboardSharpness</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Estimates the sharpness of a detected chessboard.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Scalar.html" title="class in org.opencv.core">Scalar</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateChessboardSharpness(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,float)" class="member-name-link">estimateChessboardSharpness</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
|
|
float rise_distance)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Estimates the sharpness of a detected chessboard.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Scalar.html" title="class in org.opencv.core">Scalar</a></code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateChessboardSharpness(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,float,boolean)" class="member-name-link">estimateChessboardSharpness</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
|
|
float rise_distance,
|
|
boolean vertical)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Estimates the sharpness of a detected chessboard.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Scalar.html" title="class in org.opencv.core">Scalar</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateChessboardSharpness(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,float,boolean,org.opencv.core.Mat)" class="member-name-link">estimateChessboardSharpness</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
|
|
float rise_distance,
|
|
boolean vertical,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> sharpness)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Estimates the sharpness of a detected chessboard.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateTranslation3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">estimateTranslation3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes an optimal translation between two 3D point sets.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateTranslation3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)" class="member-name-link">estimateTranslation3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
double ransacThreshold)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes an optimal translation between two 3D point sets.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#estimateTranslation3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,double)" class="member-name-link">estimateTranslation3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
double ransacThreshold,
double confidence)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes an optimal translation between two 3D point sets.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#filterHomographyDecompByVisibleRefpoints(java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">filterHomographyDecompByVisibleRefpoints</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rotations,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> normals,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> beforePoints,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> afterPoints,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> possibleSolutions)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Filters homography decompositions based on additional information.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#filterHomographyDecompByVisibleRefpoints(java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">filterHomographyDecompByVisibleRefpoints</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rotations,
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> normals,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> beforePoints,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> afterPoints,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> possibleSolutions,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> pointsMask)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Filters homography decompositions based on additional information.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#filterSpeckles(org.opencv.core.Mat,double,int,double)" class="member-name-link">filterSpeckles</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> img,
double newVal,
int maxSpeckleSize,
double maxDiff)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Filters off small noise blobs (speckles) in the disparity map</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#filterSpeckles(org.opencv.core.Mat,double,int,double,org.opencv.core.Mat)" class="member-name-link">filterSpeckles</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> img,
double newVal,
int maxSpeckleSize,
double maxDiff,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> buf)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Filters off small noise blobs (speckles) in the disparity map</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#find4QuadCornerSubpix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size)" class="member-name-link">find4QuadCornerSubpix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> img,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> region_size)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findChessboardCorners(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.MatOfPoint2f)" class="member-name-link">findChessboardCorners</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> corners)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds the positions of internal corners of the chessboard.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findChessboardCorners(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.MatOfPoint2f,int)" class="member-name-link">findChessboardCorners</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> corners,
int flags)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds the positions of internal corners of the chessboard.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findChessboardCornersSB(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat)" class="member-name-link">findChessboardCornersSB</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findChessboardCornersSB(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,int)" class="member-name-link">findChessboardCornersSB</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
int flags)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findChessboardCornersSBWithMeta(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,int,org.opencv.core.Mat)" class="member-name-link">findChessboardCornersSBWithMeta</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
int flags,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> meta)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds the positions of internal corners of the chessboard using a sector based approach.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findCirclesGrid(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat)" class="member-name-link">findCirclesGrid</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> centers)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findCirclesGrid(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,int)" class="member-name-link">findCirclesGrid</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> centers,
int flags)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
double focal)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
double focal,
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,int)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
double focal,
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
int method)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,int,double)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
double focal,
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
int method,
double prob)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,int,double,double)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
double focal,
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
int method,
double prob,
double threshold)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,int,double,double,int)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
double focal,
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
int method,
double prob,
double threshold,
int maxIters)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,int,double,double,int,org.opencv.core.Mat)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
double focal,
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
int method,
double prob,
double threshold,
int maxIters,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
int method)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
int method,
double prob)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
int method,
double prob,
double threshold)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double,int)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
int method,
double prob,
double threshold,
int maxIters)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double,int,org.opencv.core.Mat)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
int method,
double prob,
double threshold,
int maxIters,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
int method)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
int method,
double prob)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
int method,
|
|
double prob,
|
|
double threshold)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double,org.opencv.core.Mat)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
int method,
|
|
double prob,
|
|
double threshold,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.calib3d.UsacParams)" class="member-name-link">findEssentialMat</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dist_coeff1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dist_coeff2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
 <a href="UsacParams.html" title="class in org.opencv.calib3d">UsacParams</a> params)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f)" class="member-name-link">findFundamentalMat</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int)" class="member-name-link">findFundamentalMat</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
 int method)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double)" class="member-name-link">findFundamentalMat</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
 int method,
 double ransacReprojThreshold)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,double)" class="member-name-link">findFundamentalMat</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
 int method,
 double ransacReprojThreshold,
 double confidence)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,double,int)" class="member-name-link">findFundamentalMat</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
 int method,
 double ransacReprojThreshold,
 double confidence,
 int maxIters)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates a fundamental matrix from the corresponding points in two images.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,double,int,org.opencv.core.Mat)" class="member-name-link">findFundamentalMat</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
 int method,
 double ransacReprojThreshold,
 double confidence,
 int maxIters,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calculates a fundamental matrix from the corresponding points in two images.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,double,org.opencv.core.Mat)" class="member-name-link">findFundamentalMat</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
 int method,
 double ransacReprojThreshold,
 double confidence,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.calib3d.UsacParams)" class="member-name-link">findFundamentalMat</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
 <a href="UsacParams.html" title="class in org.opencv.calib3d">UsacParams</a> params)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f)" class="member-name-link">findHomography</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds a perspective transformation between two planes.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int)" class="member-name-link">findHomography</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
 int method)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds a perspective transformation between two planes.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double)" class="member-name-link">findHomography</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
 int method,
 double ransacReprojThreshold)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds a perspective transformation between two planes.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,org.opencv.core.Mat)" class="member-name-link">findHomography</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
 int method,
 double ransacReprojThreshold,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds a perspective transformation between two planes.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,org.opencv.core.Mat,int)" class="member-name-link">findHomography</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
 int method,
 double ransacReprojThreshold,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
 int maxIters)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds a perspective transformation between two planes.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,org.opencv.core.Mat,int,double)" class="member-name-link">findHomography</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
 int method,
 double ransacReprojThreshold,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
 int maxIters,
 double confidence)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds a perspective transformation between two planes.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.calib3d.UsacParams)" class="member-name-link">findHomography</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
 <a href="UsacParams.html" title="class in org.opencv.calib3d">UsacParams</a> params)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_calibrate(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List)" class="member-name-link">fisheye_calibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Performs camera calibration</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_calibrate(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int)" class="member-name-link">fisheye_calibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 int flags)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Performs camera calibration</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_calibrate(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int,org.opencv.core.TermCriteria)" class="member-name-link">fisheye_calibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 int flags,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Performs camera calibration</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_distortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_distortPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Distorts 2D points using fisheye model.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_distortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)" class="member-name-link">fisheye_distortPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 double alpha)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Distorts 2D points using fisheye model.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_distortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_distortPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Kundistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Overload of distortPoints function to handle cases when undistorted points are obtained with non-identity
 camera matrix, e.g.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_distortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)" class="member-name-link">fisheye_distortPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Kundistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 double alpha)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Overload of distortPoints function to handle cases when undistorted points are obtained with non-identity
 camera matrix, e.g.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_estimateNewCameraMatrixForUndistortRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_estimateNewCameraMatrixForUndistortRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Estimates new camera intrinsic matrix for undistortion or rectification.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_estimateNewCameraMatrixForUndistortRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,double)" class="member-name-link">fisheye_estimateNewCameraMatrixForUndistortRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
 double balance)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Estimates new camera intrinsic matrix for undistortion or rectification.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_estimateNewCameraMatrixForUndistortRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Size)" class="member-name-link">fisheye_estimateNewCameraMatrixForUndistortRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
 double balance,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> new_size)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Estimates new camera intrinsic matrix for undistortion or rectification.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_estimateNewCameraMatrixForUndistortRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Size,double)" class="member-name-link">fisheye_estimateNewCameraMatrixForUndistortRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
 double balance,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> new_size,
 double fov_scale)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Estimates new camera intrinsic matrix for undistortion or rectification.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_initUndistortRectifyMap(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_initUndistortRectifyMap</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> size,
|
|
int m1type,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map2)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes undistortion and rectification maps for image transformation by #remap.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_projectPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_projectPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_projectPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)" class="member-name-link">fisheye_projectPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
double alpha)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_projectPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Mat)" class="member-name-link">fisheye_projectPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
double alpha,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> jacobian)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnP(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_solvePnP</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnP(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean)" class="member-name-link">fisheye_solvePnP</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnP(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int)" class="member-name-link">fisheye_solvePnP</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int flags)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnP(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,org.opencv.core.TermCriteria)" class="member-name-link">fisheye_solvePnP</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_solvePnPRansac</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean)" class="member-name-link">fisheye_solvePnPRansac</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int)" class="member-name-link">fisheye_solvePnPRansac</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float)" class="member-name-link">fisheye_solvePnPRansac</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double)" class="member-name-link">fisheye_solvePnPRansac</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double,org.opencv.core.Mat)" class="member-name-link">fisheye_solvePnPRansac</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double,org.opencv.core.Mat,int)" class="member-name-link">fisheye_solvePnPRansac</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int flags)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)" class="member-name-link">fisheye_solvePnPRansac</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">fisheye_stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
int flags)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)" class="member-name-link">fisheye_stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List)" class="member-name-link">fisheye_stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Performs stereo calibration.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int)" class="member-name-link">fisheye_stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
int flags)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Performs stereo calibration</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int,org.opencv.core.TermCriteria)" class="member-name-link">fisheye_stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Performs stereo calibration</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">fisheye_stereoRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
 int flags)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Stereo rectification for fisheye camera model</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.Size)" class="member-name-link">fisheye_stereoRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
 int flags,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Stereo rectification for fisheye camera model</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.Size,double)" class="member-name-link">fisheye_stereoRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
 int flags,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize,
 double balance)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Stereo rectification for fisheye camera model</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.Size,double,double)" class="member-name-link">fisheye_stereoRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
 int flags,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize,
 double balance,
 double fov_scale)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Stereo rectification for fisheye camera model</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_undistortImage(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_undistortImage</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Transforms an image to compensate for fisheye lens distortion.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_undistortImage(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_undistortImage</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Knew)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Transforms an image to compensate for fisheye lens distortion.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_undistortImage(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size)" class="member-name-link">fisheye_undistortImage</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Knew,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> new_size)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Transforms an image to compensate for fisheye lens distortion.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_undistortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_undistortPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Undistorts 2D points using fisheye model</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_undistortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_undistortPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Undistorts 2D points using fisheye model</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_undistortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">fisheye_undistortPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Undistorts 2D points using fisheye model</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#fisheye_undistortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria)" class="member-name-link">fisheye_undistortPoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Undistorts 2D points using fisheye model</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#getDefaultNewCameraMatrix(org.opencv.core.Mat)" class="member-name-link">getDefaultNewCameraMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Returns the default new camera matrix.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#getDefaultNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Size)" class="member-name-link">getDefaultNewCameraMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imgsize)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Returns the default new camera matrix.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#getDefaultNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Size,boolean)" class="member-name-link">getDefaultNewCameraMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imgsize,
 boolean centerPrincipalPoint)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Returns the default new camera matrix.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#getOptimalNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,double)" class="member-name-link">getOptimalNewCameraMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 double alpha)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Returns the new camera intrinsic matrix based on the free scaling parameter.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#getOptimalNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,double,org.opencv.core.Size)" class="member-name-link">getOptimalNewCameraMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 double alpha,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImgSize)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Returns the new camera intrinsic matrix based on the free scaling parameter.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#getOptimalNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,double,org.opencv.core.Size,org.opencv.core.Rect)" class="member-name-link">getOptimalNewCameraMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 double alpha,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImgSize,
 <a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> validPixROI)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Returns the new camera intrinsic matrix based on the free scaling parameter.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#getOptimalNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,double,org.opencv.core.Size,org.opencv.core.Rect,boolean)" class="member-name-link">getOptimalNewCameraMatrix</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 double alpha,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImgSize,
 <a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> validPixROI,
 boolean centerPrincipalPoint)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Returns the new camera intrinsic matrix based on the free scaling parameter.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Rect.html" title="class in org.opencv.core">Rect</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#getValidDisparityROI(org.opencv.core.Rect,org.opencv.core.Rect,int,int,int)" class="member-name-link">getValidDisparityROI</a><wbr>(<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> roi1,
 <a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> roi2,
 int minDisparity,
 int numberOfDisparities,
 int blockSize)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#initCameraMatrix2D(java.util.List,java.util.List,org.opencv.core.Size)" class="member-name-link">initCameraMatrix2D</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds an initial camera intrinsic matrix from 3D-2D point correspondences.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#initCameraMatrix2D(java.util.List,java.util.List,org.opencv.core.Size,double)" class="member-name-link">initCameraMatrix2D</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a>> imagePoints,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 double aspectRatio)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Finds an initial camera intrinsic matrix from 3D-2D point correspondences.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#initInverseRectificationMap(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">initInverseRectificationMap</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newCameraMatrix,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> size,
 int m1type,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map2)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes the projection and inverse-rectification transformation map.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#initUndistortRectifyMap(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">initUndistortRectifyMap</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newCameraMatrix,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> size,
 int m1type,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map2)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes the undistortion and rectification transformation map.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#matMulDeriv(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">matMulDeriv</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> A,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> B,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dABdA,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dABdB)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes partial derivatives of the matrix product for each multiplied matrix.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#projectPoints(org.opencv.core.MatOfPoint3f,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.MatOfPoint2f)" class="member-name-link">projectPoints</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Projects 3D points to an image plane.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#projectPoints(org.opencv.core.MatOfPoint3f,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat)" class="member-name-link">projectPoints</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> jacobian)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Projects 3D points to an image plane.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#projectPoints(org.opencv.core.MatOfPoint3f,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,double)" class="member-name-link">projectPoints</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> jacobian,
|
|
double aspectRatio)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Projects 3D points to an image plane.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double focal)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double focal,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,org.opencv.core.Mat)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double focal,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Recovers the relative camera rotation and the translation from an estimated essential
|
|
matrix and the corresponding points in two images, using chirality check.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double distanceThresh)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Mat)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double distanceThresh,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double distanceThresh,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> triangulatedPoints)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Recovers the relative camera rotation and the translation from an estimated essential
|
|
matrix and the corresponding points in two images, using chirality check.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
int method)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
int method,
|
|
double prob)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
int method,
|
|
double prob,
|
|
double threshold)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double,org.opencv.core.Mat)" class="member-name-link">recoverPose</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
int method,
|
|
double prob,
|
|
double threshold,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static float</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#rectify3Collinear(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Size,org.opencv.core.Rect,org.opencv.core.Rect,int)" class="member-name-link">rectify3Collinear</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs3,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imgpt1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imgpt3,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R12,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T12,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R13,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T13,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
double alpha,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImgSize,
|
|
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> roi1,
|
|
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> roi2,
|
|
int flags)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#reprojectImageTo3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">reprojectImageTo3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> disparity,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> _3dImage,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Reprojects a disparity image to 3D space.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#reprojectImageTo3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean)" class="member-name-link">reprojectImageTo3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> disparity,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> _3dImage,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
boolean handleMissingValues)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Reprojects a disparity image to 3D space.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#reprojectImageTo3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int)" class="member-name-link">reprojectImageTo3D</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> disparity,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> _3dImage,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
boolean handleMissingValues,
|
|
int ddepth)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Reprojects a disparity image to 3D space.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#Rodrigues(org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">Rodrigues</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Converts a rotation matrix to a rotation vector or vice versa.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#Rodrigues(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">Rodrigues</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> jacobian)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Converts a rotation matrix to a rotation vector or vice versa.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double[]</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#RQDecomp3x3(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">RQDecomp3x3</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxR,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxQ)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an RQ decomposition of 3x3 matrices.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double[]</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#RQDecomp3x3(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">RQDecomp3x3</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxR,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxQ,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qx)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an RQ decomposition of 3x3 matrices.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double[]</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#RQDecomp3x3(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">RQDecomp3x3</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxR,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxQ,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qx,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qy)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an RQ decomposition of 3x3 matrices.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double[]</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#RQDecomp3x3(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">RQDecomp3x3</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxR,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxQ,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qx,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qy,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qz)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes an RQ decomposition of 3x3 matrices.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#sampsonDistance(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">sampsonDistance</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> pt1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> pt2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Calculates the Sampson Distance between two points.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solveP3P(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int)" class="member-name-link">solveP3P</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
int flags)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from <b>3</b> 3D-2D point correspondences.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnP(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">solvePnP</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.

 See: calib3d_solvePnP

 This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
 coordinate frame to the camera coordinate frame, using different methods:

 P3P methods (SOLVEPNP_P3P, SOLVEPNP_AP3P) need 4 input points to return a unique solution.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnP(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean)" class="member-name-link">solvePnP</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.

 See: calib3d_solvePnP

 This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
 coordinate frame to the camera coordinate frame, using different methods:

 P3P methods (SOLVEPNP_P3P, SOLVEPNP_AP3P) need 4 input points to return a unique solution.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnP(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int)" class="member-name-link">solvePnP</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int flags)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.

 See: calib3d_solvePnP

 This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
 coordinate frame to the camera coordinate frame, using different methods:

 P3P methods (SOLVEPNP_P3P, SOLVEPNP_AP3P) need 4 input points to return a unique solution.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List)" class="member-name-link">solvePnPGeneric</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,boolean)" class="member-name-link">solvePnPGeneric</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
boolean useExtrinsicGuess)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,boolean,int)" class="member-name-link">solvePnPGeneric</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
boolean useExtrinsicGuess,
|
|
int flags)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,boolean,int,org.opencv.core.Mat)" class="member-name-link">solvePnPGeneric</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
boolean useExtrinsicGuess,
|
|
int flags,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,boolean,int,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">solvePnPGeneric</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
boolean useExtrinsicGuess,
|
|
int flags,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static int</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,boolean,int,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">solvePnPGeneric</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
boolean useExtrinsicGuess,
|
|
int flags,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> reprojectionError)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">solvePnPRansac</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean)" class="member-name-link">solvePnPRansac</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int)" class="member-name-link">solvePnPRansac</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float)" class="member-name-link">solvePnPRansac</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double)" class="member-name-link">solvePnPRansac</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double,org.opencv.core.Mat)" class="member-name-link">solvePnPRansac</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double,org.opencv.core.Mat,int)" class="member-name-link">solvePnPRansac</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int flags)</code></div>
|
|
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.</div>
|
|
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">solvePnPRansac</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.calib3d.UsacParams)" class="member-name-link">solvePnPRansac</a><wbr>(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
 <a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
 <a href="UsacParams.html" title="class in org.opencv.calib3d">UsacParams</a> params)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRefineLM(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">solvePnPRefineLM</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame
to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRefineLM(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria)" class="member-name-link">solvePnPRefineLM</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame
to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.</div>
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRefineVVS(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">solvePnPRefineVVS</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame
to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRefineVVS(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria)" class="member-name-link">solvePnPRefineVVS</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame
to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#solvePnPRefineVVS(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria,double)" class="member-name-link">solvePnPRefineVVS</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria,
 double VVSlambda)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame
to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.</div>
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
 int flags)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)" class="member-name-link">stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
 int flags,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
 int flags)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)" class="member-name-link">stereoCalibrate</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
 int flags,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoCalibrateExtended(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat)" class="member-name-link">stereoCalibrateExtended</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calibrates a stereo camera setup.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoCalibrateExtended(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,int)" class="member-name-link">stereoCalibrateExtended</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
 int flags)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calibrates a stereo camera setup.</div>
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static double</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoCalibrateExtended(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)" class="member-name-link">stereoCalibrateExtended</a><wbr>(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
 <a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
 <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
 <a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
 int flags,
 <a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Calibrates a stereo camera setup.</div>
</div>
|
|
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
|
|
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">stereoRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q)</code></div>
|
|
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
|
|
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
|
|
</div>
|
|
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)" class="member-name-link">stereoRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
int flags)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)" class="member-name-link">stereoRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
int flags,
double alpha)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,org.opencv.core.Size)" class="member-name-link">stereoRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
int flags,
double alpha,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,org.opencv.core.Size,org.opencv.core.Rect)" class="member-name-link">stereoRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
int flags,
double alpha,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize,
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> validPixROI1)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,org.opencv.core.Size,org.opencv.core.Rect,org.opencv.core.Rect)" class="member-name-link">stereoRectify</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
int flags,
double alpha,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize,
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> validPixROI1,
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> validPixROI2)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoRectifyUncalibrated(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">stereoRectifyUncalibrated</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imgSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> H1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> H2)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes a rectification transform for an uncalibrated stereo camera.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static boolean</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#stereoRectifyUncalibrated(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,double)" class="member-name-link">stereoRectifyUncalibrated</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imgSize,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> H1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> H2,
double threshold)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes a rectification transform for an uncalibrated stereo camera.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#triangulatePoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">triangulatePoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatr1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatr2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projPoints1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projPoints2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points4D)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Reconstructs 3-dimensional points (in homogeneous coordinates) from their observations with a stereo camera.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#undistort(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">undistort</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Transforms an image to compensate for lens distortion.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#undistort(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">undistort</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newCameraMatrix)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Transforms an image to compensate for lens distortion.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#undistortImagePoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">undistortImagePoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes the positions of undistorted image points.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#undistortImagePoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria)" class="member-name-link">undistortImagePoints</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> arg1)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes the positions of undistorted image points.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#undistortPoints(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">undistortPoints</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> src,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dst,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes the ideal point coordinates from the observed point coordinates.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#undistortPoints(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">undistortPoints</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> src,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dst,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes the ideal point coordinates from the observed point coordinates.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#undistortPoints(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)" class="member-name-link">undistortPoints</a><wbr>(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> src,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dst,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block">Computes the ideal point coordinates from the observed point coordinates.</div>
</div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#undistortPointsIter(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria)" class="member-name-link">undistortPointsIter</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4">
<div class="block"><b>Note:</b> The default version of #undistortPoints performs 5 iterations to compute the undistorted points.</div>
</div>
<div class="col-first odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#validateDisparity(org.opencv.core.Mat,org.opencv.core.Mat,int,int)" class="member-name-link">validateDisparity</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> disparity,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cost,
int minDisparity,
int numberOfDisparities)</code></div>
<div class="col-last odd-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
<div class="col-first even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code>static void</code></div>
<div class="col-second even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"><code><a href="#validateDisparity(org.opencv.core.Mat,org.opencv.core.Mat,int,int,int)" class="member-name-link">validateDisparity</a><wbr>(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> disparity,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cost,
int minDisparity,
int numberOfDisparities,
int disp12MaxDisp)</code></div>
<div class="col-last even-row-color method-summary-table method-summary-table-tab1 method-summary-table-tab4"> </div>
</div>
</div>
</div>
<div class="inherited-list">
<h3 id="methods-inherited-from-class-java.lang.Object">Methods inherited from class java.lang.<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html" title="class or interface in java.lang" class="external-link">Object</a></h3>
<code><a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html#equals(java.lang.Object)" title="class or interface in java.lang" class="external-link">equals</a>, <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html#getClass()" title="class or interface in java.lang" class="external-link">getClass</a>, <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html#hashCode()" title="class or interface in java.lang" class="external-link">hashCode</a>, <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html#notify()" title="class or interface in java.lang" class="external-link">notify</a>, <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html#notifyAll()" title="class or interface in java.lang" class="external-link">notifyAll</a>, <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html#toString()" title="class or interface in java.lang" class="external-link">toString</a>, <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html#wait()" title="class or interface in java.lang" class="external-link">wait</a>, <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html#wait(long)" title="class or interface in java.lang" class="external-link">wait</a>, <a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html#wait(long,int)" title="class or interface in java.lang" class="external-link">wait</a></code></div>
</section>
</li>
</ul>
</section>
<section class="details">
<ul class="details-list">
<!-- ============ FIELD DETAIL =========== -->
<li>
<section class="field-details" id="field-detail">
<h2>Field Details</h2>
<ul class="member-list">
<li>
<section class="detail" id="CV_ITERATIVE">
<h3>CV_ITERATIVE</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CV_ITERATIVE</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CV_ITERATIVE">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CV_EPNP">
<h3>CV_EPNP</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CV_EPNP</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CV_EPNP">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CV_P3P">
<h3>CV_P3P</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CV_P3P</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CV_P3P">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CV_DLS">
<h3>CV_DLS</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CV_DLS</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CV_DLS">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CvLevMarq_DONE">
<h3>CvLevMarq_DONE</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CvLevMarq_DONE</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CvLevMarq_DONE">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CvLevMarq_STARTED">
<h3>CvLevMarq_STARTED</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CvLevMarq_STARTED</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CvLevMarq_STARTED">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CvLevMarq_CALC_J">
<h3>CvLevMarq_CALC_J</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CvLevMarq_CALC_J</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CvLevMarq_CALC_J">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CvLevMarq_CHECK_ERR">
<h3>CvLevMarq_CHECK_ERR</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CvLevMarq_CHECK_ERR</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CvLevMarq_CHECK_ERR">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="LMEDS">
<h3>LMEDS</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">LMEDS</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.LMEDS">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="RANSAC">
<h3>RANSAC</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">RANSAC</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.RANSAC">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="RHO">
<h3>RHO</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">RHO</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.RHO">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="USAC_DEFAULT">
<h3>USAC_DEFAULT</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">USAC_DEFAULT</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.USAC_DEFAULT">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="USAC_PARALLEL">
<h3>USAC_PARALLEL</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">USAC_PARALLEL</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.USAC_PARALLEL">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="USAC_FM_8PTS">
|
|
<h3>USAC_FM_8PTS</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">USAC_FM_8PTS</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.USAC_FM_8PTS">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="USAC_FAST">
|
|
<h3>USAC_FAST</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">USAC_FAST</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.USAC_FAST">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="USAC_ACCURATE">
|
|
<h3>USAC_ACCURATE</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">USAC_ACCURATE</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.USAC_ACCURATE">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="USAC_PROSAC">
|
|
<h3>USAC_PROSAC</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">USAC_PROSAC</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.USAC_PROSAC">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="USAC_MAGSAC">
|
|
<h3>USAC_MAGSAC</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">USAC_MAGSAC</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.USAC_MAGSAC">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
<section class="detail" id="CALIB_CB_ADAPTIVE_THRESH">
<h3>CALIB_CB_ADAPTIVE_THRESH</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_ADAPTIVE_THRESH</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_ADAPTIVE_THRESH">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_NORMALIZE_IMAGE">
<h3>CALIB_CB_NORMALIZE_IMAGE</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_NORMALIZE_IMAGE</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_NORMALIZE_IMAGE">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_FILTER_QUADS">
<h3>CALIB_CB_FILTER_QUADS</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_FILTER_QUADS</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_FILTER_QUADS">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_FAST_CHECK">
<h3>CALIB_CB_FAST_CHECK</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_FAST_CHECK</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_FAST_CHECK">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_EXHAUSTIVE">
<h3>CALIB_CB_EXHAUSTIVE</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_EXHAUSTIVE</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_EXHAUSTIVE">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_ACCURACY">
<h3>CALIB_CB_ACCURACY</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_ACCURACY</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_ACCURACY">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_LARGER">
<h3>CALIB_CB_LARGER</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_LARGER</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_LARGER">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_MARKER">
<h3>CALIB_CB_MARKER</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_MARKER</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_MARKER">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_PLAIN">
<h3>CALIB_CB_PLAIN</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_PLAIN</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_PLAIN">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_SYMMETRIC_GRID">
<h3>CALIB_CB_SYMMETRIC_GRID</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_SYMMETRIC_GRID</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_SYMMETRIC_GRID">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_ASYMMETRIC_GRID">
<h3>CALIB_CB_ASYMMETRIC_GRID</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_ASYMMETRIC_GRID</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_ASYMMETRIC_GRID">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_CB_CLUSTERING">
<h3>CALIB_CB_CLUSTERING</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_CB_CLUSTERING</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_CB_CLUSTERING">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
|
|
<li>
<section class="detail" id="CALIB_NINTRINSIC">
<h3>CALIB_NINTRINSIC</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_NINTRINSIC</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_NINTRINSIC">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_USE_INTRINSIC_GUESS">
<h3>CALIB_USE_INTRINSIC_GUESS</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_USE_INTRINSIC_GUESS</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_USE_INTRINSIC_GUESS">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_ASPECT_RATIO">
<h3>CALIB_FIX_ASPECT_RATIO</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_ASPECT_RATIO</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_ASPECT_RATIO">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_PRINCIPAL_POINT">
<h3>CALIB_FIX_PRINCIPAL_POINT</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_PRINCIPAL_POINT</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_PRINCIPAL_POINT">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_ZERO_TANGENT_DIST">
<h3>CALIB_ZERO_TANGENT_DIST</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_ZERO_TANGENT_DIST</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_ZERO_TANGENT_DIST">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_FOCAL_LENGTH">
<h3>CALIB_FIX_FOCAL_LENGTH</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_FOCAL_LENGTH</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_FOCAL_LENGTH">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_K1">
<h3>CALIB_FIX_K1</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_K1</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_K1">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_K2">
<h3>CALIB_FIX_K2</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_K2</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_K2">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_K3">
<h3>CALIB_FIX_K3</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_K3</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_K3">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_K4">
<h3>CALIB_FIX_K4</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_K4</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_K4">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_K5">
<h3>CALIB_FIX_K5</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_K5</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_K5">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_K6">
<h3>CALIB_FIX_K6</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_K6</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_K6">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
|
|
<li>
<section class="detail" id="CALIB_RATIONAL_MODEL">
<h3>CALIB_RATIONAL_MODEL</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_RATIONAL_MODEL</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_RATIONAL_MODEL">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_THIN_PRISM_MODEL">
<h3>CALIB_THIN_PRISM_MODEL</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_THIN_PRISM_MODEL</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_THIN_PRISM_MODEL">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_S1_S2_S3_S4">
<h3>CALIB_FIX_S1_S2_S3_S4</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_S1_S2_S3_S4</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_S1_S2_S3_S4">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_TILTED_MODEL">
<h3>CALIB_TILTED_MODEL</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_TILTED_MODEL</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_TILTED_MODEL">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_TAUX_TAUY">
<h3>CALIB_FIX_TAUX_TAUY</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_TAUX_TAUY</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_TAUX_TAUY">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_USE_QR">
<h3>CALIB_USE_QR</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_USE_QR</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_USE_QR">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_TANGENT_DIST">
<h3>CALIB_FIX_TANGENT_DIST</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_TANGENT_DIST</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_TANGENT_DIST">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_FIX_INTRINSIC">
<h3>CALIB_FIX_INTRINSIC</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_FIX_INTRINSIC</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_FIX_INTRINSIC">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_SAME_FOCAL_LENGTH">
<h3>CALIB_SAME_FOCAL_LENGTH</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_SAME_FOCAL_LENGTH</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_SAME_FOCAL_LENGTH">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_ZERO_DISPARITY">
<h3>CALIB_ZERO_DISPARITY</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_ZERO_DISPARITY</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_ZERO_DISPARITY">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_USE_LU">
<h3>CALIB_USE_LU</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_USE_LU</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_USE_LU">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CALIB_USE_EXTRINSIC_GUESS">
<h3>CALIB_USE_EXTRINSIC_GUESS</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_USE_EXTRINSIC_GUESS</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_USE_EXTRINSIC_GUESS">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
|
|
<li>
<section class="detail" id="FM_7POINT">
<h3>FM_7POINT</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">FM_7POINT</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.FM_7POINT">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="FM_8POINT">
<h3>FM_8POINT</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">FM_8POINT</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.FM_8POINT">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="FM_LMEDS">
<h3>FM_LMEDS</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">FM_LMEDS</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.FM_LMEDS">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="FM_RANSAC">
<h3>FM_RANSAC</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">FM_RANSAC</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.FM_RANSAC">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
|
|
<li>
<section class="detail" id="fisheye_CALIB_USE_INTRINSIC_GUESS">
<h3>fisheye_CALIB_USE_INTRINSIC_GUESS</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_USE_INTRINSIC_GUESS</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_USE_INTRINSIC_GUESS">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_RECOMPUTE_EXTRINSIC">
<h3>fisheye_CALIB_RECOMPUTE_EXTRINSIC</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_RECOMPUTE_EXTRINSIC</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_RECOMPUTE_EXTRINSIC">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_CHECK_COND">
<h3>fisheye_CALIB_CHECK_COND</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_CHECK_COND</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_CHECK_COND">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_FIX_SKEW">
<h3>fisheye_CALIB_FIX_SKEW</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_FIX_SKEW</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_FIX_SKEW">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_FIX_K1">
<h3>fisheye_CALIB_FIX_K1</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_FIX_K1</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_FIX_K1">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_FIX_K2">
<h3>fisheye_CALIB_FIX_K2</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_FIX_K2</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_FIX_K2">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_FIX_K3">
<h3>fisheye_CALIB_FIX_K3</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_FIX_K3</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_FIX_K3">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_FIX_K4">
<h3>fisheye_CALIB_FIX_K4</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_FIX_K4</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_FIX_K4">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_FIX_INTRINSIC">
<h3>fisheye_CALIB_FIX_INTRINSIC</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_FIX_INTRINSIC</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_FIX_INTRINSIC">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_FIX_PRINCIPAL_POINT">
<h3>fisheye_CALIB_FIX_PRINCIPAL_POINT</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_FIX_PRINCIPAL_POINT</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_FIX_PRINCIPAL_POINT">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_ZERO_DISPARITY">
<h3>fisheye_CALIB_ZERO_DISPARITY</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_ZERO_DISPARITY</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_ZERO_DISPARITY">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_CALIB_FIX_FOCAL_LENGTH">
<h3>fisheye_CALIB_FIX_FOCAL_LENGTH</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">fisheye_CALIB_FIX_FOCAL_LENGTH</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.fisheye_CALIB_FIX_FOCAL_LENGTH">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
|
|
<li>
<section class="detail" id="CirclesGridFinderParameters_SYMMETRIC_GRID">
<h3>CirclesGridFinderParameters_SYMMETRIC_GRID</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CirclesGridFinderParameters_SYMMETRIC_GRID</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CirclesGridFinderParameters_SYMMETRIC_GRID">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="CirclesGridFinderParameters_ASYMMETRIC_GRID">
<h3>CirclesGridFinderParameters_ASYMMETRIC_GRID</h3>
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CirclesGridFinderParameters_ASYMMETRIC_GRID</span></div>
<dl class="notes">
<dt>See Also:</dt>
<dd>
<ul class="see-list">
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CirclesGridFinderParameters_ASYMMETRIC_GRID">Constant Field Values</a></li>
</ul>
</dd>
</dl>
</section>
</li>
|
|
<li>
|
|
<section class="detail" id="CALIB_HAND_EYE_TSAI">
|
|
<h3>CALIB_HAND_EYE_TSAI</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_HAND_EYE_TSAI</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_HAND_EYE_TSAI">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="CALIB_HAND_EYE_PARK">
|
|
<h3>CALIB_HAND_EYE_PARK</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_HAND_EYE_PARK</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_HAND_EYE_PARK">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="CALIB_HAND_EYE_HORAUD">
|
|
<h3>CALIB_HAND_EYE_HORAUD</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_HAND_EYE_HORAUD</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_HAND_EYE_HORAUD">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="CALIB_HAND_EYE_ANDREFF">
|
|
<h3>CALIB_HAND_EYE_ANDREFF</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_HAND_EYE_ANDREFF</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_HAND_EYE_ANDREFF">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="CALIB_HAND_EYE_DANIILIDIS">
|
|
<h3>CALIB_HAND_EYE_DANIILIDIS</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_HAND_EYE_DANIILIDIS</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_HAND_EYE_DANIILIDIS">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="LOCAL_OPTIM_NULL">
|
|
<h3>LOCAL_OPTIM_NULL</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">LOCAL_OPTIM_NULL</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.LOCAL_OPTIM_NULL">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="LOCAL_OPTIM_INNER_LO">
|
|
<h3>LOCAL_OPTIM_INNER_LO</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">LOCAL_OPTIM_INNER_LO</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.LOCAL_OPTIM_INNER_LO">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="LOCAL_OPTIM_INNER_AND_ITER_LO">
|
|
<h3>LOCAL_OPTIM_INNER_AND_ITER_LO</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">LOCAL_OPTIM_INNER_AND_ITER_LO</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.LOCAL_OPTIM_INNER_AND_ITER_LO">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="LOCAL_OPTIM_GC">
|
|
<h3>LOCAL_OPTIM_GC</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">LOCAL_OPTIM_GC</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.LOCAL_OPTIM_GC">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="LOCAL_OPTIM_SIGMA">
|
|
<h3>LOCAL_OPTIM_SIGMA</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">LOCAL_OPTIM_SIGMA</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.LOCAL_OPTIM_SIGMA">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="NEIGH_FLANN_KNN">
|
|
<h3>NEIGH_FLANN_KNN</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">NEIGH_FLANN_KNN</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.NEIGH_FLANN_KNN">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="NEIGH_GRID">
|
|
<h3>NEIGH_GRID</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">NEIGH_GRID</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.NEIGH_GRID">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="NEIGH_FLANN_RADIUS">
|
|
<h3>NEIGH_FLANN_RADIUS</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">NEIGH_FLANN_RADIUS</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.NEIGH_FLANN_RADIUS">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="NONE_POLISHER">
|
|
<h3>NONE_POLISHER</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">NONE_POLISHER</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.NONE_POLISHER">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="LSQ_POLISHER">
|
|
<h3>LSQ_POLISHER</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">LSQ_POLISHER</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.LSQ_POLISHER">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="MAGSAC">
|
|
<h3>MAGSAC</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">MAGSAC</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.MAGSAC">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="COV_POLISHER">
|
|
<h3>COV_POLISHER</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">COV_POLISHER</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.COV_POLISHER">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="CALIB_ROBOT_WORLD_HAND_EYE_SHAH">
|
|
<h3>CALIB_ROBOT_WORLD_HAND_EYE_SHAH</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_ROBOT_WORLD_HAND_EYE_SHAH</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_ROBOT_WORLD_HAND_EYE_SHAH">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="CALIB_ROBOT_WORLD_HAND_EYE_LI">
|
|
<h3>CALIB_ROBOT_WORLD_HAND_EYE_LI</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">CALIB_ROBOT_WORLD_HAND_EYE_LI</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.CALIB_ROBOT_WORLD_HAND_EYE_LI">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SAMPLING_UNIFORM">
|
|
<h3>SAMPLING_UNIFORM</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SAMPLING_UNIFORM</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SAMPLING_UNIFORM">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SAMPLING_PROGRESSIVE_NAPSAC">
|
|
<h3>SAMPLING_PROGRESSIVE_NAPSAC</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SAMPLING_PROGRESSIVE_NAPSAC</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SAMPLING_PROGRESSIVE_NAPSAC">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SAMPLING_NAPSAC">
|
|
<h3>SAMPLING_NAPSAC</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SAMPLING_NAPSAC</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SAMPLING_NAPSAC">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SAMPLING_PROSAC">
|
|
<h3>SAMPLING_PROSAC</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SAMPLING_PROSAC</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SAMPLING_PROSAC">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SCORE_METHOD_RANSAC">
|
|
<h3>SCORE_METHOD_RANSAC</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SCORE_METHOD_RANSAC</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SCORE_METHOD_RANSAC">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SCORE_METHOD_MSAC">
|
|
<h3>SCORE_METHOD_MSAC</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SCORE_METHOD_MSAC</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SCORE_METHOD_MSAC">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SCORE_METHOD_MAGSAC">
|
|
<h3>SCORE_METHOD_MAGSAC</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SCORE_METHOD_MAGSAC</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SCORE_METHOD_MAGSAC">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SCORE_METHOD_LMEDS">
|
|
<h3>SCORE_METHOD_LMEDS</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SCORE_METHOD_LMEDS</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SCORE_METHOD_LMEDS">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SOLVEPNP_ITERATIVE">
|
|
<h3>SOLVEPNP_ITERATIVE</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SOLVEPNP_ITERATIVE</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SOLVEPNP_ITERATIVE">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SOLVEPNP_EPNP">
|
|
<h3>SOLVEPNP_EPNP</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SOLVEPNP_EPNP</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SOLVEPNP_EPNP">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SOLVEPNP_P3P">
|
|
<h3>SOLVEPNP_P3P</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SOLVEPNP_P3P</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SOLVEPNP_P3P">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SOLVEPNP_DLS">
|
|
<h3>SOLVEPNP_DLS</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SOLVEPNP_DLS</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SOLVEPNP_DLS">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SOLVEPNP_UPNP">
|
|
<h3>SOLVEPNP_UPNP</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SOLVEPNP_UPNP</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SOLVEPNP_UPNP">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SOLVEPNP_AP3P">
|
|
<h3>SOLVEPNP_AP3P</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SOLVEPNP_AP3P</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SOLVEPNP_AP3P">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SOLVEPNP_IPPE">
|
|
<h3>SOLVEPNP_IPPE</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SOLVEPNP_IPPE</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SOLVEPNP_IPPE">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SOLVEPNP_IPPE_SQUARE">
|
|
<h3>SOLVEPNP_IPPE_SQUARE</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SOLVEPNP_IPPE_SQUARE</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SOLVEPNP_IPPE_SQUARE">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SOLVEPNP_SQPNP">
|
|
<h3>SOLVEPNP_SQPNP</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SOLVEPNP_SQPNP</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SOLVEPNP_SQPNP">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="SOLVEPNP_MAX_COUNT">
|
|
<h3>SOLVEPNP_MAX_COUNT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">SOLVEPNP_MAX_COUNT</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.SOLVEPNP_MAX_COUNT">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="PROJ_SPHERICAL_ORTHO">
|
|
<h3>PROJ_SPHERICAL_ORTHO</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">PROJ_SPHERICAL_ORTHO</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.PROJ_SPHERICAL_ORTHO">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="PROJ_SPHERICAL_EQRECT">
|
|
<h3>PROJ_SPHERICAL_EQRECT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static final</span> <span class="return-type">int</span> <span class="element-name">PROJ_SPHERICAL_EQRECT</span></div>
|
|
<dl class="notes">
|
|
<dt>See Also:</dt>
|
|
<dd>
|
|
<ul class="see-list">
|
|
<li><a href="../../../constant-values.html#org.opencv.calib3d.Calib3d.PROJ_SPHERICAL_EQRECT">Constant Field Values</a></li>
|
|
</ul>
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
</ul>
|
|
</section>
|
|
</li>
|
|
<!-- ========= CONSTRUCTOR DETAIL ======== -->
|
|
<li>
|
|
<section class="constructor-details" id="constructor-detail">
|
|
<h2>Constructor Details</h2>
|
|
<ul class="member-list">
|
|
<li>
|
|
<section class="detail" id="<init>()">
|
|
<h3>Calib3d</h3>
|
|
<div class="member-signature"><span class="modifiers">public</span> <span class="element-name">Calib3d</span>()</div>
|
|
</section>
|
|
</li>
|
|
</ul>
|
|
</section>
|
|
</li>
|
|
<!-- ============ METHOD DETAIL ========== -->
|
|
<li>
|
|
<section class="method-details" id="method-detail">
|
|
<h2>Method Details</h2>
|
|
<ul class="member-list">
|
|
<li>
|
|
<section class="detail" id="Rodrigues(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>Rodrigues</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">Rodrigues</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> jacobian)</span></div>
|
|
<div class="block">Converts a rotation matrix to a rotation vector or vice versa.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - Input rotation vector (3x1 or 1x3) or rotation matrix (3x3).</dd>
|
|
<dd><code>dst</code> - Output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively.</dd>
|
|
<dd><code>jacobian</code> - Optional output Jacobian matrix, 3x9 or 9x3, which is a matrix of partial
|
|
derivatives of the output array components with respect to the input array components.
|
|
|
|
\(\begin{array}{l} \theta \leftarrow norm(r) \\ r \leftarrow r/ \theta \\ R = \cos(\theta) I + (1- \cos{\theta} ) r r^T + \sin(\theta) \vecthreethree{0}{-r_z}{r_y}{r_z}{0}{-r_x}{-r_y}{r_x}{0} \end{array}\)
|
|
|
|
The inverse transformation can also be done easily, since
|
|
|
|
\(\sin ( \theta ) \vecthreethree{0}{-r_z}{r_y}{r_z}{0}{-r_x}{-r_y}{r_x}{0} = \frac{R - R^T}{2}\)
|
|
|
|
A rotation vector is a convenient and compact representation of a rotation matrix (since any
|
|
rotation matrix has just 3 degrees of freedom). This representation is used in global 3D geometry
|
|
optimization procedures such as calibrateCamera, stereoCalibrate, or solvePnP.
|
|
|
|
<b>Note:</b> More information about the computation of the derivative of a 3D rotation matrix with respect to its exponential coordinate
|
|
can be found in:
|
|
<ul>
|
|
<li>
|
|
A Compact Formula for the Derivative of a 3-D Rotation in Exponential Coordinates, Guillermo Gallego and Anthony J. Yezzi [Gallego2014ACF]
|
|
</li>
|
|
</ul>
|
|
|
|
<b>Note:</b> Useful information on SE(3) and Lie Groups can be found in:
|
|
<ul>
|
|
<li>
|
|
A tutorial on SE(3) transformation parameterizations and on-manifold optimization, Jose-Luis Blanco [blanco2010tutorial]
|
|
</li>
|
|
<li>
|
|
Lie Groups for 2D and 3D Transformation, Ethan Eade [Eade17]
|
|
</li>
|
|
<li>
|
|
A micro Lie theory for state estimation in robotics, Joan Solà, Jérémie Deray, and Dinesh Atchuthan [Sol2018AML]
|
|
</li>
|
|
</ul></dd>
|
|
</dl>
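As an illustration, the closed-form formula above can be sketched in plain Java, independent of the OpenCV bindings (the class and method names here are illustrative, not part of the OpenCV API); in real code you would simply call Calib3d.Rodrigues(rvec, R):

```java
// Plain-Java sketch of the Rodrigues formula documented above:
// R = cos(theta) I + (1 - cos(theta)) r r^T + sin(theta) [r]_x
public class RodriguesSketch {
    static double[][] rodrigues(double[] rvec) {
        double theta = Math.sqrt(rvec[0]*rvec[0] + rvec[1]*rvec[1] + rvec[2]*rvec[2]);
        double[][] R = new double[3][3];
        if (theta < 1e-12) {                 // zero rotation -> identity matrix
            for (int i = 0; i < 3; i++) R[i][i] = 1.0;
            return R;
        }
        double[] r = { rvec[0]/theta, rvec[1]/theta, rvec[2]/theta };
        double c = Math.cos(theta), s = Math.sin(theta);
        // Cross-product (skew-symmetric) matrix [r]_x of the unit axis r.
        double[][] K = {
            {    0, -r[2],  r[1] },
            { r[2],     0, -r[0] },
            {-r[1],  r[0],     0 }
        };
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                R[i][j] = (i == j ? c : 0) + (1 - c) * r[i] * r[j] + s * K[i][j];
        return R;
    }

    public static void main(String[] args) {
        // A 90-degree rotation about the z-axis maps the x-axis onto the y-axis.
        double[][] R = rodrigues(new double[]{0, 0, Math.PI / 2});
        System.out.printf("%.3f %.3f%n", R[1][0], R[0][0]);  // 1.000 0.000
    }
}
```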
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="Rodrigues(org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>Rodrigues</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">Rodrigues</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst)</span></div>
|
|
<div class="block">Converts a rotation matrix to a rotation vector or vice versa.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - Input rotation vector (3x1 or 1x3) or rotation matrix (3x3).</dd>
|
|
<dd><code>dst</code> - Output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively.
|
|
|
|
\(\begin{array}{l} \theta \leftarrow norm(r) \\ r \leftarrow r/ \theta \\ R = \cos(\theta) I + (1- \cos{\theta} ) r r^T + \sin(\theta) \vecthreethree{0}{-r_z}{r_y}{r_z}{0}{-r_x}{-r_y}{r_x}{0} \end{array}\)
|
|
|
|
The inverse transformation can also be done easily, since
|
|
|
|
\(\sin ( \theta ) \vecthreethree{0}{-r_z}{r_y}{r_z}{0}{-r_x}{-r_y}{r_x}{0} = \frac{R - R^T}{2}\)
|
|
|
|
A rotation vector is a convenient and compact representation of a rotation matrix (since any
|
|
rotation matrix has just 3 degrees of freedom). This representation is used in global 3D geometry
|
|
optimization procedures such as calibrateCamera, stereoCalibrate, or solvePnP.
|
|
|
|
<b>Note:</b> More information about the computation of the derivative of a 3D rotation matrix with respect to its exponential coordinate
|
|
can be found in:
|
|
<ul>
|
|
<li>
|
|
A Compact Formula for the Derivative of a 3-D Rotation in Exponential Coordinates, Guillermo Gallego and Anthony J. Yezzi [Gallego2014ACF]
|
|
</li>
|
|
</ul>
|
|
|
|
<b>Note:</b> Useful information on SE(3) and Lie Groups can be found in:
|
|
<ul>
|
|
<li>
|
|
A tutorial on SE(3) transformation parameterizations and on-manifold optimization, Jose-Luis Blanco [blanco2010tutorial]
|
|
</li>
|
|
<li>
|
|
Lie Groups for 2D and 3D Transformation, Ethan Eade [Eade17]
|
|
</li>
|
|
<li>
|
|
A micro Lie theory for state estimation in robotics, Joan Solà, Jérémie Deray, and Dinesh Atchuthan [Sol2018AML]
|
|
</li>
|
|
</ul></dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,org.opencv.core.Mat,int,double)">
|
|
<h3>findHomography</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findHomography</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
|
|
int maxIters,
|
|
double confidence)</span></div>
|
|
<div class="block">Finds a perspective transformation between two planes.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>srcPoints</code> - Coordinates of the points in the original plane, a matrix of the type CV_32FC2
|
|
or vector<Point2f> .</dd>
|
|
<dd><code>dstPoints</code> - Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or
|
|
a vector<Point2f> .</dd>
|
|
<dd><code>method</code> - Method used to compute a homography matrix. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
<b>0</b> - a regular method using all the points, i.e., the least squares method
|
|
</li>
|
|
<li>
|
|
RANSAC - a RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
LMEDS - a least-median robust method
|
|
</li>
|
|
<li>
|
|
RHO - a PROSAC-based robust method
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum allowed reprojection error to treat a point pair as an inlier
|
|
(used in the RANSAC and RHO methods only). That is, if
|
|
\(\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \cdot \texttt{srcPoints} _i) \|_2 > \texttt{ransacReprojThreshold}\)
|
|
then the point \(i\) is considered an outlier. If srcPoints and dstPoints are measured in pixels,
|
|
it usually makes sense to set this parameter somewhere in the range of 1 to 10.</dd>
|
|
<dd><code>mask</code> - Optional output mask set by a robust method ( RANSAC or LMeDS ). Note that the input
|
|
mask values are ignored.</dd>
|
|
<dd><code>maxIters</code> - The maximum number of RANSAC iterations.</dd>
|
|
<dd><code>confidence</code> - Confidence level, between 0 and 1.
|
|
|
|
The function finds and returns the perspective transformation \(H\) between the source and the
|
|
destination planes:
|
|
|
|
\(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\)
|
|
|
|
so that the back-projection error
|
|
|
|
\(\sum _i \left ( x'_i- \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2+ \left ( y'_i- \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\)
|
|
|
|
is minimized. If the parameter method is set to the default value 0, the function uses all the point
|
|
pairs to compute an initial homography estimate with a simple least-squares scheme.
|
|
|
|
However, if not all of the point pairs ( \(srcPoints_i\), \(dstPoints_i\) ) fit the rigid perspective
|
|
transformation (that is, there are some outliers), this initial estimate will be poor. In this case,
|
|
you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different
|
|
random subsets of the corresponding point pairs (of four pairs each, collinear pairs are discarded), estimate the homography matrix
|
|
using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the
|
|
computed homography (which is the number of inliers for RANSAC or the least median re-projection error for
|
|
LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and
|
|
the mask of inliers/outliers.
|
|
|
|
Regardless of the method, robust or not, the computed homography matrix is refined further (using
|
|
inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the
|
|
re-projection error even more.
|
|
|
|
The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to
|
|
distinguish inliers from outliers. The LMeDS method does not need a threshold, but it works
|
|
correctly only when more than 50% of the points are inliers. Finally, if there are no outliers and the
|
|
noise is rather small, use the default method (method=0).
|
|
|
|
The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is
|
|
determined only up to scale. If \(h_{33}\) is non-zero, the matrix is normalized so that \(h_{33}=1\).
|
|
<b>Note:</b> Whenever an \(H\) matrix cannot be estimated, an empty one will be returned.
|
|
|
|
See also:
|
|
getAffineTransform, estimateAffine2D, estimateAffinePartial2D, getPerspectiveTransform, warpPerspective,
|
|
perspectiveTransform</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the estimated \(3 \times 3\) homography matrix, or an empty matrix if no homography could be estimated</dd>
|
|
</dl>
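To make the projective mapping concrete, here is a plain-Java sketch (independent of OpenCV; the class and method names are illustrative) of the dehomogenization that underlies the back-projection error above:

```java
public class HomographyDemo {
    // Applies a 3x3 homography H to a 2D point and dehomogenizes the result,
    // i.e. computes (x', y') from s * (x', y', 1)^T = H * (x, y, 1)^T.
    static double[] apply(double[][] H, double x, double y) {
        double w = H[2][0]*x + H[2][1]*y + H[2][2];
        return new double[] {
            (H[0][0]*x + H[0][1]*y + H[0][2]) / w,
            (H[1][0]*x + H[1][1]*y + H[1][2]) / w
        };
    }

    public static void main(String[] args) {
        // A pure translation by (5, -2) expressed as a homography.
        double[][] H = { {1, 0, 5}, {0, 1, -2}, {0, 0, 1} };
        double[] p = apply(H, 10, 10);
        System.out.printf("%.1f %.1f%n", p[0], p[1]);  // 15.0 8.0
    }
}
```

In actual OpenCV code the matrix itself would come from a call such as Calib3d.findHomography(srcPoints, dstPoints, Calib3d.RANSAC, 3.0), after which the returned Mat can be read out and applied as above.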
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,org.opencv.core.Mat,int)">
|
|
<h3>findHomography</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findHomography</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
|
|
int maxIters)</span></div>
|
|
<div class="block">Finds a perspective transformation between two planes.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>srcPoints</code> - Coordinates of the points in the original plane, a matrix of the type CV_32FC2
|
|
or vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>dstPoints</code> - Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or
|
|
a vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>method</code> - Method used to compute a homography matrix. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
<b>0</b> - a regular method using all the points, i.e., the least squares method
|
|
</li>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
</li>
|
|
<li>
|
|
REF: RHO - PROSAC-based robust method
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum allowed reprojection error to treat a point pair as an inlier
|
|
(used in the RANSAC and RHO methods only). That is, if
|
|
\(\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \cdot \texttt{srcPoints} _i) \|_2 > \texttt{ransacReprojThreshold}\)
|
|
then the point \(i\) is considered as an outlier. If srcPoints and dstPoints are measured in pixels,
|
|
it usually makes sense to set this parameter somewhere in the range of 1 to 10.</dd>
|
|
<dd><code>mask</code> - Optional output mask set by a robust method ( RANSAC or LMeDS ). Note that the input
|
|
mask values are ignored.</dd>
|
|
<dd><code>maxIters</code> - The maximum number of RANSAC iterations.
|
|
|
|
The function finds and returns the perspective transformation \(H\) between the source and the
|
|
destination planes:
|
|
|
|
\(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\)
|
|
|
|
so that the back-projection error
|
|
|
|
\(\sum _i \left ( x'_i- \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2+ \left ( y'_i- \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\)
|
|
|
|
is minimized. If the parameter method is set to the default value 0, the function uses all the point
|
|
pairs to compute an initial homography estimate with a simple least-squares scheme.
|
|
|
|
However, if not all of the point pairs ( \(srcPoints_i\), \(dstPoints_i\) ) fit the rigid perspective
|
|
transformation (that is, there are some outliers), this initial estimate will be poor. In this case,
|
|
you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different
|
|
random subsets of the corresponding point pairs (four pairs each; subsets with collinear points are discarded), estimate the homography matrix
|
|
using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the
|
|
computed homography (which is the number of inliers for RANSAC or the least median re-projection error for
|
|
LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and
|
|
the mask of inliers/outliers.
|
|
|
|
Regardless of the method, robust or not, the computed homography matrix is refined further (using
|
|
inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the
|
|
re-projection error even more.
|
|
|
|
The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when more than 50% of the point pairs are inliers. Finally, if there are no outliers and the
|
|
noise is rather small, use the default method (method=0).
|
|
|
|
The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is
|
|
determined up to a scale. If \(h_{33}\) is non-zero, the matrix is normalized so that \(h_{33}=1\).
|
|
<b>Note:</b> Whenever an \(H\) matrix cannot be estimated, an empty one will be returned.
|
|
|
|
SEE:
|
|
getAffineTransform, estimateAffine2D, estimateAffinePartial2D, getPerspectiveTransform, warpPerspective,
|
|
perspectiveTransform</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
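The overload above can be exercised on synthetic data. The sketch below is illustrative only, assuming the OpenCV Java bindings are on the classpath and the native library is on <code>java.library.path</code>; the class name <code>FindHomographyDemo</code> and the point values are made up for the example. Five correspondences related by a pure scaling homography are fed to RANSAC, which recovers a matrix close to diag(2, 2, 1) with all points marked as inliers in the mask.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;

public class FindHomographyDemo {
    public static void main(String[] args) {
        // The OpenCV native library must be on java.library.path.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Five correspondences related by a pure scaling homography: dst = 2 * src.
        MatOfPoint2f src = new MatOfPoint2f(
                new Point(0, 0), new Point(100, 0), new Point(100, 100),
                new Point(0, 100), new Point(50, 25));
        MatOfPoint2f dst = new MatOfPoint2f(
                new Point(0, 0), new Point(200, 0), new Point(200, 200),
                new Point(0, 200), new Point(100, 50));

        // RANSAC with a 3-pixel reprojection threshold and at most 2000 iterations.
        Mat mask = new Mat();
        Mat H = Calib3d.findHomography(src, dst, Calib3d.RANSAC, 3.0, mask, 2000);

        if (H.empty()) {
            // An empty Mat means the homography could not be estimated.
            System.out.println("Homography could not be estimated");
            return;
        }
        // H is normalized so that h33 = 1; here it is close to diag(2, 2, 1).
        System.out.printf("h11=%.3f h22=%.3f h33=%.3f, inliers=%d%n",
                H.get(0, 0)[0], H.get(1, 1)[0], H.get(2, 2)[0],
                Core.countNonZero(mask));
    }
}
```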
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,org.opencv.core.Mat)">
|
|
<h3>findHomography</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findHomography</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</span></div>
|
|
<div class="block">Finds a perspective transformation between two planes.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>srcPoints</code> - Coordinates of the points in the original plane, a matrix of the type CV_32FC2
|
|
or vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>dstPoints</code> - Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or
|
|
a vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>method</code> - Method used to compute a homography matrix. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
<b>0</b> - a regular method using all the points, i.e., the least squares method
|
|
</li>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
</li>
|
|
<li>
|
|
REF: RHO - PROSAC-based robust method
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum allowed reprojection error to treat a point pair as an inlier
|
|
(used in the RANSAC and RHO methods only). That is, if
|
|
\(\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \cdot \texttt{srcPoints} _i) \|_2 > \texttt{ransacReprojThreshold}\)
|
|
then the point \(i\) is considered as an outlier. If srcPoints and dstPoints are measured in pixels,
|
|
it usually makes sense to set this parameter somewhere in the range of 1 to 10.</dd>
|
|
<dd><code>mask</code> - Optional output mask set by a robust method ( RANSAC or LMeDS ). Note that the input
|
|
mask values are ignored.
|
|
|
|
The function finds and returns the perspective transformation \(H\) between the source and the
|
|
destination planes:
|
|
|
|
\(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\)
|
|
|
|
so that the back-projection error
|
|
|
|
\(\sum _i \left ( x'_i- \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2+ \left ( y'_i- \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\)
|
|
|
|
is minimized. If the parameter method is set to the default value 0, the function uses all the point
|
|
pairs to compute an initial homography estimate with a simple least-squares scheme.
|
|
|
|
However, if not all of the point pairs ( \(srcPoints_i\), \(dstPoints_i\) ) fit the rigid perspective
|
|
transformation (that is, there are some outliers), this initial estimate will be poor. In this case,
|
|
you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different
|
|
random subsets of the corresponding point pairs (four pairs each; subsets with collinear points are discarded), estimate the homography matrix
|
|
using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the
|
|
computed homography (which is the number of inliers for RANSAC or the least median re-projection error for
|
|
LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and
|
|
the mask of inliers/outliers.
|
|
|
|
Regardless of the method, robust or not, the computed homography matrix is refined further (using
|
|
inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the
|
|
re-projection error even more.
|
|
|
|
The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when more than 50% of the point pairs are inliers. Finally, if there are no outliers and the
|
|
noise is rather small, use the default method (method=0).
|
|
|
|
The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is
|
|
determined up to a scale. If \(h_{33}\) is non-zero, the matrix is normalized so that \(h_{33}=1\).
|
|
<b>Note:</b> Whenever an \(H\) matrix cannot be estimated, an empty one will be returned.
|
|
|
|
SEE:
|
|
getAffineTransform, estimateAffine2D, estimateAffinePartial2D, getPerspectiveTransform, warpPerspective,
|
|
perspectiveTransform</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double)">
|
|
<h3>findHomography</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findHomography</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
|
|
int method,
|
|
double ransacReprojThreshold)</span></div>
|
|
<div class="block">Finds a perspective transformation between two planes.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>srcPoints</code> - Coordinates of the points in the original plane, a matrix of the type CV_32FC2
|
|
or vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>dstPoints</code> - Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or
|
|
a vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>method</code> - Method used to compute a homography matrix. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
<b>0</b> - a regular method using all the points, i.e., the least squares method
|
|
</li>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
</li>
|
|
<li>
|
|
REF: RHO - PROSAC-based robust method
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum allowed reprojection error to treat a point pair as an inlier
|
|
(used in the RANSAC and RHO methods only). That is, if
|
|
\(\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \cdot \texttt{srcPoints} _i) \|_2 > \texttt{ransacReprojThreshold}\)
|
|
then the point \(i\) is considered as an outlier. If srcPoints and dstPoints are measured in pixels,
|
|
it usually makes sense to set this parameter somewhere in the range of 1 to 10.
|
|
Note that the input mask values are ignored.
|
|
|
|
The function finds and returns the perspective transformation \(H\) between the source and the
|
|
destination planes:
|
|
|
|
\(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\)
|
|
|
|
so that the back-projection error
|
|
|
|
\(\sum _i \left ( x'_i- \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2+ \left ( y'_i- \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\)
|
|
|
|
is minimized. If the parameter method is set to the default value 0, the function uses all the point
|
|
pairs to compute an initial homography estimate with a simple least-squares scheme.
|
|
|
|
However, if not all of the point pairs ( \(srcPoints_i\), \(dstPoints_i\) ) fit the rigid perspective
|
|
transformation (that is, there are some outliers), this initial estimate will be poor. In this case,
|
|
you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different
|
|
random subsets of the corresponding point pairs (four pairs each; subsets with collinear points are discarded), estimate the homography matrix
|
|
using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the
|
|
computed homography (which is the number of inliers for RANSAC or the least median re-projection error for
|
|
LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and
|
|
the mask of inliers/outliers.
|
|
|
|
Regardless of the method, robust or not, the computed homography matrix is refined further (using
|
|
inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the
|
|
re-projection error even more.
|
|
|
|
The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when more than 50% of the point pairs are inliers. Finally, if there are no outliers and the
|
|
noise is rather small, use the default method (method=0).
|
|
|
|
The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is
|
|
determined up to a scale. If \(h_{33}\) is non-zero, the matrix is normalized so that \(h_{33}=1\).
|
|
<b>Note:</b> Whenever an \(H\) matrix cannot be estimated, an empty one will be returned.
|
|
|
|
SEE:
|
|
getAffineTransform, estimateAffine2D, estimateAffinePartial2D, getPerspectiveTransform, warpPerspective,
|
|
perspectiveTransform</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int)">
|
|
<h3>findHomography</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findHomography</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
|
|
int method)</span></div>
|
|
<div class="block">Finds a perspective transformation between two planes.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>srcPoints</code> - Coordinates of the points in the original plane, a matrix of the type CV_32FC2
|
|
or vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>dstPoints</code> - Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or
|
|
a vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>method</code> - Method used to compute a homography matrix. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
<b>0</b> - a regular method using all the points, i.e., the least squares method
|
|
</li>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
</li>
|
|
<li>
|
|
REF: RHO - PROSAC-based robust method
|
|
</li>
|
|
</ul>
|
|
(used in the RANSAC and RHO methods only). That is, if
|
|
\(\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \cdot \texttt{srcPoints} _i) \|_2 > \texttt{ransacReprojThreshold}\)
|
|
then the point \(i\) is considered as an outlier. If srcPoints and dstPoints are measured in pixels,
|
|
it usually makes sense to set this parameter somewhere in the range of 1 to 10.
|
|
Note that the input mask values are ignored.
|
|
|
|
The function finds and returns the perspective transformation \(H\) between the source and the
|
|
destination planes:
|
|
|
|
\(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\)
|
|
|
|
so that the back-projection error
|
|
|
|
\(\sum _i \left ( x'_i- \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2+ \left ( y'_i- \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\)
|
|
|
|
is minimized. If the parameter method is set to the default value 0, the function uses all the point
|
|
pairs to compute an initial homography estimate with a simple least-squares scheme.
|
|
|
|
However, if not all of the point pairs ( \(srcPoints_i\), \(dstPoints_i\) ) fit the rigid perspective
|
|
transformation (that is, there are some outliers), this initial estimate will be poor. In this case,
|
|
you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different
|
|
random subsets of the corresponding point pairs (four pairs each; subsets with collinear points are discarded), estimate the homography matrix
|
|
using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the
|
|
computed homography (which is the number of inliers for RANSAC or the least median re-projection error for
|
|
LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and
|
|
the mask of inliers/outliers.
|
|
|
|
Regardless of the method, robust or not, the computed homography matrix is refined further (using
|
|
inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the
|
|
re-projection error even more.
|
|
|
|
The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when more than 50% of the point pairs are inliers. Finally, if there are no outliers and the
|
|
noise is rather small, use the default method (method=0).
|
|
|
|
The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is
|
|
determined up to a scale. If \(h_{33}\) is non-zero, the matrix is normalized so that \(h_{33}=1\).
|
|
<b>Note:</b> Whenever an \(H\) matrix cannot be estimated, an empty one will be returned.
|
|
|
|
SEE:
|
|
getAffineTransform, estimateAffine2D, estimateAffinePartial2D, getPerspectiveTransform, warpPerspective,
|
|
perspectiveTransform</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f)">
|
|
<h3>findHomography</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findHomography</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints)</span></div>
|
|
<div class="block">Finds a perspective transformation between two planes.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>srcPoints</code> - Coordinates of the points in the original plane, a matrix of the type CV_32FC2
|
|
or vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>dstPoints</code> - Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or
|
|
a vector&lt;Point2f&gt;.
|
|
<ul>
|
|
<li>
|
|
<b>0</b> - a regular method using all the points, i.e., the least squares method
|
|
</li>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
</li>
|
|
<li>
|
|
REF: RHO - PROSAC-based robust method
|
|
</li>
|
|
</ul>
|
|
(used in the RANSAC and RHO methods only). That is, if
|
|
\(\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \cdot \texttt{srcPoints} _i) \|_2 > \texttt{ransacReprojThreshold}\)
|
|
then the point \(i\) is considered as an outlier. If srcPoints and dstPoints are measured in pixels,
|
|
it usually makes sense to set this parameter somewhere in the range of 1 to 10.
|
|
Note that the input mask values are ignored.
|
|
|
|
The function finds and returns the perspective transformation \(H\) between the source and the
|
|
destination planes:
|
|
|
|
\(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\)
|
|
|
|
so that the back-projection error
|
|
|
|
\(\sum _i \left ( x'_i- \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2+ \left ( y'_i- \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\)
|
|
|
|
is minimized. If the parameter method is set to the default value 0, the function uses all the point
|
|
pairs to compute an initial homography estimate with a simple least-squares scheme.
|
|
|
|
However, if not all of the point pairs ( \(srcPoints_i\), \(dstPoints_i\) ) fit the rigid perspective
|
|
transformation (that is, there are some outliers), this initial estimate will be poor. In this case,
|
|
you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different
|
|
random subsets of the corresponding point pairs (four pairs each; subsets with collinear points are discarded), estimate the homography matrix
|
|
using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the
|
|
computed homography (which is the number of inliers for RANSAC or the least median re-projection error for
|
|
LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and
|
|
the mask of inliers/outliers.
|
|
|
|
Regardless of the method, robust or not, the computed homography matrix is refined further (using
|
|
inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the
|
|
re-projection error even more.
|
|
|
|
The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when more than 50% of the point pairs are inliers. Finally, if there are no outliers and the
|
|
noise is rather small, use the default method (method=0).
|
|
|
|
The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is
|
|
determined up to a scale. If \(h_{33}\) is non-zero, the matrix is normalized so that \(h_{33}=1\).
|
|
<b>Note:</b> Whenever an \(H\) matrix cannot be estimated, an empty one will be returned.
|
|
|
|
SEE:
|
|
getAffineTransform, estimateAffine2D, estimateAffinePartial2D, getPerspectiveTransform, warpPerspective,
|
|
perspectiveTransform</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findHomography(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.calib3d.UsacParams)">
|
|
<h3>findHomography</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findHomography</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> srcPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dstPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
|
|
<a href="UsacParams.html" title="class in org.opencv.calib3d">UsacParams</a> params)</span></div>
|
|
</section>
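The <code>UsacParams</code> overload bundles all robust-estimation settings into one object instead of separate method/threshold/iteration arguments. The sketch below is an assumption-laden illustration: it relies on the default <code>UsacParams()</code> constructor settings, on the native library being loadable, and the class name <code>UsacHomographyDemo</code> and the point data are invented for the example.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.calib3d.UsacParams;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;

public class UsacHomographyDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Exact correspondences under dst = 2 * src, as in the RANSAC example.
        MatOfPoint2f src = new MatOfPoint2f(
                new Point(0, 0), new Point(100, 0), new Point(100, 100),
                new Point(0, 100), new Point(50, 25));
        MatOfPoint2f dst = new MatOfPoint2f(
                new Point(0, 0), new Point(200, 0), new Point(200, 200),
                new Point(0, 200), new Point(100, 50));

        // Default USAC configuration (sampler, scoring, threshold, iteration budget).
        UsacParams params = new UsacParams();

        Mat mask = new Mat();
        Mat H = Calib3d.findHomography(src, dst, mask, params);

        System.out.println(H.empty() ? "estimation failed"
                                     : "h11 = " + H.get(0, 0)[0]);
    }
}
```

Tuning individual fields of <code>UsacParams</code> (e.g. its threshold or confidence) is done through the generated accessor methods of the class.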
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="RQDecomp3x3(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>RQDecomp3x3</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double[]</span> <span class="element-name">RQDecomp3x3</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxR,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxQ,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qx,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qy,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qz)</span></div>
|
|
<div class="block">Computes an RQ decomposition of 3x3 matrices.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - 3x3 input matrix.</dd>
|
|
<dd><code>mtxR</code> - Output 3x3 upper-triangular matrix.</dd>
|
|
<dd><code>mtxQ</code> - Output 3x3 orthogonal matrix.</dd>
|
|
<dd><code>Qx</code> - Optional output 3x3 rotation matrix around x-axis.</dd>
|
|
<dd><code>Qy</code> - Optional output 3x3 rotation matrix around y-axis.</dd>
|
|
<dd><code>Qz</code> - Optional output 3x3 rotation matrix around z-axis.
|
|
|
|
The function computes an RQ decomposition using the given rotations. This function is used in
|
|
#decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera
|
|
and a rotation matrix.
|
|
|
|
It optionally returns three rotation matrices, one for each axis, and the three Euler angles in
|
|
degrees (as the return value) that could be used in OpenGL. Note that there is always more than one
|
|
sequence of rotations about the three principal axes that results in the same orientation of an
|
|
object; e.g., see CITE: Slabaugh. The three returned rotation matrices and the corresponding Euler angles
|
|
are only one of the possible solutions.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
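A useful sanity check for this decomposition: when the input is already orthogonal (a pure rotation), the upper-triangular factor degenerates to the identity and <code>mtxQ</code> reproduces the input. The sketch below assumes the native library is available; the class name <code>RQDemo</code> and the 30-degree test rotation are invented for the example.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class RQDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // A pure rotation about the z-axis by 30 degrees.
        double t = Math.toRadians(30);
        Mat src = new Mat(3, 3, CvType.CV_64F);
        src.put(0, 0,
                Math.cos(t), -Math.sin(t), 0,
                Math.sin(t),  Math.cos(t), 0,
                0,            0,           1);

        Mat mtxR = new Mat();
        Mat mtxQ = new Mat();
        // The return value holds the three Euler angles in degrees.
        double[] euler = Calib3d.RQDecomp3x3(src, mtxR, mtxQ);

        // For an orthogonal src, mtxR is (numerically) the identity and
        // mtxQ equals src; the z Euler angle has magnitude 30 degrees.
        System.out.printf("r11=%.3f euler=(%.1f, %.1f, %.1f)%n",
                mtxR.get(0, 0)[0], euler[0], euler[1], euler[2]);
    }
}
```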
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="RQDecomp3x3(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>RQDecomp3x3</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double[]</span> <span class="element-name">RQDecomp3x3</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxR,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxQ,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qx,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qy)</span></div>
|
|
<div class="block">Computes an RQ decomposition of 3x3 matrices.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - 3x3 input matrix.</dd>
|
|
<dd><code>mtxR</code> - Output 3x3 upper-triangular matrix.</dd>
|
|
<dd><code>mtxQ</code> - Output 3x3 orthogonal matrix.</dd>
|
|
<dd><code>Qx</code> - Optional output 3x3 rotation matrix around x-axis.</dd>
|
|
<dd><code>Qy</code> - Optional output 3x3 rotation matrix around y-axis.
|
|
|
|
The function computes an RQ decomposition using the given rotations. This function is used in
|
|
#decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera
|
|
and a rotation matrix.
|
|
|
|
It optionally returns three rotation matrices, one for each axis, and the three Euler angles in
|
|
degrees (as the return value) that could be used in OpenGL. Note that there is always more than one
|
|
sequence of rotations about the three principal axes that results in the same orientation of an
|
|
object; e.g., see CITE: Slabaugh. The three returned rotation matrices and the corresponding Euler angles
|
|
are only one of the possible solutions.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="RQDecomp3x3(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>RQDecomp3x3</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double[]</span> <span class="element-name">RQDecomp3x3</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxR,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxQ,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Qx)</span></div>
|
|
<div class="block">Computes an RQ decomposition of 3x3 matrices.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - 3x3 input matrix.</dd>
|
|
<dd><code>mtxR</code> - Output 3x3 upper-triangular matrix.</dd>
|
|
<dd><code>mtxQ</code> - Output 3x3 orthogonal matrix.</dd>
|
|
<dd><code>Qx</code> - Optional output 3x3 rotation matrix around x-axis.
|
|
|
|
The function computes an RQ decomposition using the given rotations. This function is used in
|
|
#decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera
|
|
and a rotation matrix.
|
|
|
|
It optionally returns three rotation matrices, one for each axis, and the three Euler angles in
|
|
degrees (as the return value) that could be used in OpenGL. Note that there is always more than one
|
|
sequence of rotations about the three principal axes that results in the same orientation of an
|
|
object; e.g., see CITE: Slabaugh. The three returned rotation matrices and the corresponding Euler angles
|
|
are only one of the possible solutions.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="RQDecomp3x3(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>RQDecomp3x3</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double[]</span> <span class="element-name">RQDecomp3x3</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxR,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mtxQ)</span></div>
|
|
<div class="block">Computes an RQ decomposition of 3x3 matrices.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - 3x3 input matrix.</dd>
|
|
<dd><code>mtxR</code> - Output 3x3 upper-triangular matrix.</dd>
|
|
<dd><code>mtxQ</code> - Output 3x3 orthogonal matrix.
|
|
|
|
The function computes an RQ decomposition using the given rotations. This function is used in
|
|
#decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera
|
|
and a rotation matrix.
|
|
|
|
It optionally returns three rotation matrices, one for each axis, and the three Euler angles in
|
|
degrees (as the return value) that could be used in OpenGL. Note that there is always more than one
|
|
sequence of rotations about the three principal axes that results in the same orientation of an
|
|
object; e.g., see CITE: Slabaugh. The three returned rotation matrices and the corresponding Euler angles
|
|
are only one of the possible solutions.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="decomposeProjectionMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>decomposeProjectionMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">decomposeProjectionMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> transVect,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixX,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixY,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixZ,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> eulerAngles)</span></div>
|
|
<div class="block">Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>projMatrix</code> - 3x4 input projection matrix P.</dd>
|
|
<dd><code>cameraMatrix</code> - Output 3x3 camera intrinsic matrix \(\cameramatrix{A}\).</dd>
|
|
<dd><code>rotMatrix</code> - Output 3x3 external rotation matrix R.</dd>
|
|
<dd><code>transVect</code> - Output 4x1 translation vector T.</dd>
|
|
<dd><code>rotMatrixX</code> - Optional 3x3 rotation matrix around x-axis.</dd>
|
|
<dd><code>rotMatrixY</code> - Optional 3x3 rotation matrix around y-axis.</dd>
|
|
<dd><code>rotMatrixZ</code> - Optional 3x3 rotation matrix around z-axis.</dd>
|
|
<dd><code>eulerAngles</code> - Optional three-element vector containing three Euler angles of rotation in
|
|
degrees.
|
|
|
|
The function decomposes a projection matrix into a calibration matrix, a rotation matrix, and the
position of a camera.
|
|
|
|
It optionally returns three rotation matrices, one for each axis, and three Euler angles that could
be used in OpenGL. Note that there is always more than one sequence of rotations about the three
principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The
returned rotation matrices and the corresponding Euler angles are only one of the possible solutions.
|
|
|
|
The function is based on #RQDecomp3x3 .</dd>
|
|
</dl>
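The relation behind this decomposition can be checked in plain Java. If P is built as K [R | &minus;R&middot;C] for a camera centre C, then P&middot;(C,1)ᵀ = 0, which is one way to read the 4x1 <code>transVect</code>: a homogeneous camera position, defined up to scale. The class <code>ProjDemo</code> and its helper names are illustrative, not OpenCV API:

```java
// Plain-Java sanity check of P = K [R | -R*C] and P * (C,1)^T = 0.
// Illustrative only -- not the OpenCV implementation or API.
public class ProjDemo {

    /** Multiplies a 3x4 matrix by a 4-vector. */
    public static double[] apply(double[][] p, double[] x) {
        double[] y = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 4; j++)
                y[i] += p[i][j] * x[j];
        return y;
    }

    /** Builds the 3x4 projection matrix P = K [R | -R*C]. */
    public static double[][] projection(double[][] k, double[][] rot, double[] c) {
        double[][] p = new double[3][4];
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++)
                for (int l = 0; l < 3; l++)
                    p[i][j] += k[i][l] * rot[l][j];   // left 3x3 block: K*R
            for (int l = 0; l < 3; l++)
                p[i][3] -= p[i][l] * c[l];            // last column: -K*R*C
        }
        return p;
    }
}
```

Since the camera centre is the right null vector of P, recovering it (together with the RQ factorization of the left 3x3 block) is exactly the decomposition this function performs.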
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="decomposeProjectionMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>decomposeProjectionMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">decomposeProjectionMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> transVect,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixX,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixY,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixZ)</span></div>
|
|
<div class="block">Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>projMatrix</code> - 3x4 input projection matrix P.</dd>
|
|
<dd><code>cameraMatrix</code> - Output 3x3 camera intrinsic matrix \(\cameramatrix{A}\).</dd>
|
|
<dd><code>rotMatrix</code> - Output 3x3 external rotation matrix R.</dd>
|
|
<dd><code>transVect</code> - Output 4x1 translation vector T.</dd>
|
|
<dd><code>rotMatrixX</code> - Optional 3x3 rotation matrix around x-axis.</dd>
|
|
<dd><code>rotMatrixY</code> - Optional 3x3 rotation matrix around y-axis.</dd>
|
|
<dd><code>rotMatrixZ</code> - Optional 3x3 rotation matrix around z-axis.
|
|
|
|
|
|
The function decomposes a projection matrix into a calibration matrix, a rotation matrix, and the
position of a camera.
|
|
|
|
It optionally returns three rotation matrices, one for each axis, and three Euler angles that could
be used in OpenGL. Note that there is always more than one sequence of rotations about the three
principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The
returned rotation matrices and the corresponding Euler angles are only one of the possible solutions.
|
|
|
|
The function is based on #RQDecomp3x3 .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="decomposeProjectionMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>decomposeProjectionMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">decomposeProjectionMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> transVect,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixX,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixY)</span></div>
|
|
<div class="block">Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>projMatrix</code> - 3x4 input projection matrix P.</dd>
|
|
<dd><code>cameraMatrix</code> - Output 3x3 camera intrinsic matrix \(\cameramatrix{A}\).</dd>
|
|
<dd><code>rotMatrix</code> - Output 3x3 external rotation matrix R.</dd>
|
|
<dd><code>transVect</code> - Output 4x1 translation vector T.</dd>
|
|
<dd><code>rotMatrixX</code> - Optional 3x3 rotation matrix around x-axis.</dd>
|
|
<dd><code>rotMatrixY</code> - Optional 3x3 rotation matrix around y-axis.
|
|
|
|
|
|
The function decomposes a projection matrix into a calibration matrix, a rotation matrix, and the
position of a camera.
|
|
|
|
It optionally returns three rotation matrices, one for each axis, and three Euler angles that could
be used in OpenGL. Note that there is always more than one sequence of rotations about the three
principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The
returned rotation matrices and the corresponding Euler angles are only one of the possible solutions.
|
|
|
|
The function is based on #RQDecomp3x3 .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="decomposeProjectionMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>decomposeProjectionMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">decomposeProjectionMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> transVect,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrixX)</span></div>
|
|
<div class="block">Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>projMatrix</code> - 3x4 input projection matrix P.</dd>
|
|
<dd><code>cameraMatrix</code> - Output 3x3 camera intrinsic matrix \(\cameramatrix{A}\).</dd>
|
|
<dd><code>rotMatrix</code> - Output 3x3 external rotation matrix R.</dd>
|
|
<dd><code>transVect</code> - Output 4x1 translation vector T.</dd>
|
|
<dd><code>rotMatrixX</code> - Optional 3x3 rotation matrix around x-axis.
|
|
|
|
|
|
The function decomposes a projection matrix into a calibration matrix, a rotation matrix, and the
position of a camera.
|
|
|
|
It optionally returns three rotation matrices, one for each axis, and three Euler angles that could
be used in OpenGL. Note that there is always more than one sequence of rotations about the three
principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The
returned rotation matrices and the corresponding Euler angles are only one of the possible solutions.
|
|
|
|
The function is based on #RQDecomp3x3 .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="decomposeProjectionMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>decomposeProjectionMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">decomposeProjectionMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rotMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> transVect)</span></div>
|
|
<div class="block">Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>projMatrix</code> - 3x4 input projection matrix P.</dd>
|
|
<dd><code>cameraMatrix</code> - Output 3x3 camera intrinsic matrix \(\cameramatrix{A}\).</dd>
|
|
<dd><code>rotMatrix</code> - Output 3x3 external rotation matrix R.</dd>
|
|
<dd><code>transVect</code> - Output 4x1 translation vector T.
|
|
|
|
|
|
The function decomposes a projection matrix into a calibration matrix, a rotation matrix, and the
position of a camera.
|
|
|
|
It optionally returns three rotation matrices, one for each axis, and three Euler angles that could
be used in OpenGL. Note that there is always more than one sequence of rotations about the three
principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The
returned rotation matrices and the corresponding Euler angles are only one of the possible solutions.
|
|
|
|
The function is based on #RQDecomp3x3 .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="matMulDeriv(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>matMulDeriv</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">matMulDeriv</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> A,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> B,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dABdA,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dABdB)</span></div>
|
|
<div class="block">Computes partial derivatives of the matrix product for each multiplied matrix.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>A</code> - First multiplied matrix.</dd>
|
|
<dd><code>B</code> - Second multiplied matrix.</dd>
|
|
<dd><code>dABdA</code> - First output derivative matrix d(A\*B)/dA of size
\(\texttt{A.rows*B.cols} \times \texttt{A.rows*A.cols}\) .</dd>
|
|
<dd><code>dABdB</code> - Second output derivative matrix d(A\*B)/dB of size
\(\texttt{A.rows*B.cols} \times \texttt{B.rows*B.cols}\) .
|
|
|
|
The function computes partial derivatives of the elements of the matrix product \(A*B\) with regard to
|
|
the elements of each of the two input matrices. The function is used to compute the Jacobian
|
|
matrices in #stereoCalibrate but can also be used in any other similar optimization function.</dd>
|
|
</dl>
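The structure of these Jacobians is simple: d(AB)<sub>ij</sub>/dA<sub>kl</sub> = &delta;<sub>ik</sub>&middot;B<sub>lj</sub>. The following plain-Java sketch (illustrative names, not OpenCV API; it assumes entries are flattened row-major) builds d(AB)/dA directly, and can be validated against finite differences:

```java
// Plain-Java sketch of the Jacobian d(A*B)/dA, flattened row-major.
// Illustrative only -- not the OpenCV implementation or API.
public class MatMulDerivDemo {

    /** General matrix product (a: m x k, b: k x n). */
    public static double[][] mul(double[][] a, double[][] b) {
        int m = a.length, k = b.length, n = b[0].length;
        double[][] r = new double[m][n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                for (int t = 0; t < k; t++)
                    r[i][j] += a[i][t] * b[t][j];
        return r;
    }

    /** d(A*B)/dA of size (A.rows*B.cols) x (A.rows*A.cols). */
    public static double[][] dABdA(int aRows, int aCols, int bCols, double[][] b) {
        double[][] jac = new double[aRows * bCols][aRows * aCols];
        for (int i = 0; i < aRows; i++)          // row of the product
            for (int j = 0; j < bCols; j++)      // column of the product
                for (int l = 0; l < aCols; l++)  // only A's row i affects (AB) row i
                    jac[i * bCols + j][i * aCols + l] = b[l][j];
        return jac;
    }
}
```

Each row of the Jacobian corresponds to one entry of A\*B; only the entries of A in the matching row are nonzero, which gives the block-diagonal layout.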
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>composeRT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">composeRT</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dt1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dt2)</span></div>
|
|
<div class="block">Combines two rotation-and-shift transformations.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rvec1</code> - First rotation vector.</dd>
|
|
<dd><code>tvec1</code> - First translation vector.</dd>
|
|
<dd><code>rvec2</code> - Second rotation vector.</dd>
|
|
<dd><code>tvec2</code> - Second translation vector.</dd>
|
|
<dd><code>rvec3</code> - Output rotation vector of the superposition.</dd>
|
|
<dd><code>tvec3</code> - Output translation vector of the superposition.</dd>
|
|
<dd><code>dr3dr1</code> - Optional output derivative of rvec3 with regard to rvec1</dd>
|
|
<dd><code>dr3dt1</code> - Optional output derivative of rvec3 with regard to tvec1</dd>
|
|
<dd><code>dr3dr2</code> - Optional output derivative of rvec3 with regard to rvec2</dd>
|
|
<dd><code>dr3dt2</code> - Optional output derivative of rvec3 with regard to tvec2</dd>
|
|
<dd><code>dt3dr1</code> - Optional output derivative of tvec3 with regard to rvec1</dd>
|
|
<dd><code>dt3dt1</code> - Optional output derivative of tvec3 with regard to tvec1</dd>
|
|
<dd><code>dt3dr2</code> - Optional output derivative of tvec3 with regard to rvec2</dd>
|
|
<dd><code>dt3dt2</code> - Optional output derivative of tvec3 with regard to tvec2
|
|
|
|
The functions compute:
|
|
|
|
\(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\)
|
|
|
|
where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and
|
|
\(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details.
|
|
|
|
Also, the functions can compute the derivatives of the output vectors with regard to the input
|
|
vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in
|
|
your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
|
|
function that contains a matrix multiplication.</dd>
|
|
</dl>
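The two formulas above can be exercised in plain Java with a small Rodrigues implementation (the class <code>ComposeRTDemo</code> is illustrative, not OpenCV API; the inverse conversion handles only the generic case, with the rotation angle away from 0 and &pi;):

```java
// Plain-Java sketch of composeRT:
//   rvec3 = rodrigues^-1( rodrigues(rvec2) * rodrigues(rvec1) )
//   tvec3 = rodrigues(rvec2) * tvec1 + tvec2
// Illustrative only -- not the OpenCV implementation or API.
public class ComposeRTDemo {

    /** Rotation vector -> rotation matrix: R = cos(t)I + sin(t)[k]x + (1-cos(t))kk^T. */
    public static double[][] toMatrix(double[] r) {
        double th = Math.sqrt(r[0] * r[0] + r[1] * r[1] + r[2] * r[2]);
        if (th < 1e-12) return new double[][]{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[] k = {r[0] / th, r[1] / th, r[2] / th};
        double c = Math.cos(th), s = Math.sin(th);
        double[][] kx = {{0, -k[2], k[1]}, {k[2], 0, -k[0]}, {-k[1], k[0], 0}};
        double[][] m = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                m[i][j] = (i == j ? c : 0) + s * kx[i][j] + (1 - c) * k[i] * k[j];
        return m;
    }

    /** Rotation matrix -> rotation vector (generic case: angle in (0, pi)). */
    public static double[] toVector(double[][] m) {
        double tr = m[0][0] + m[1][1] + m[2][2];
        double th = Math.acos(Math.max(-1, Math.min(1, (tr - 1) / 2)));
        if (th < 1e-12) return new double[]{0, 0, 0};
        double f = th / (2 * Math.sin(th));
        return new double[]{f * (m[2][1] - m[1][2]),
                            f * (m[0][2] - m[2][0]),
                            f * (m[1][0] - m[0][1])};
    }

    /** Returns {rvec3, tvec3}: transform (rvec1, tvec1) followed by (rvec2, tvec2). */
    public static double[][] composeRT(double[] rvec1, double[] tvec1,
                                       double[] rvec2, double[] tvec2) {
        double[][] r1 = toMatrix(rvec1), r2 = toMatrix(rvec2);
        double[][] r3 = new double[3][3];
        double[] tvec3 = new double[3];
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++)
                for (int l = 0; l < 3; l++) r3[i][j] += r2[i][l] * r1[l][j];
            for (int l = 0; l < 3; l++) tvec3[i] += r2[i][l] * tvec1[l];
            tvec3[i] += tvec2[i];
        }
        return new double[][]{toVector(r3), tvec3};
    }
}
```

A quick consistency check is to apply the two transforms to a point one after the other and compare with a single application of the composed transform.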
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>composeRT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">composeRT</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dt1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr2)</span></div>
|
|
<div class="block">Combines two rotation-and-shift transformations.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rvec1</code> - First rotation vector.</dd>
|
|
<dd><code>tvec1</code> - First translation vector.</dd>
|
|
<dd><code>rvec2</code> - Second rotation vector.</dd>
|
|
<dd><code>tvec2</code> - Second translation vector.</dd>
|
|
<dd><code>rvec3</code> - Output rotation vector of the superposition.</dd>
|
|
<dd><code>tvec3</code> - Output translation vector of the superposition.</dd>
|
|
<dd><code>dr3dr1</code> - Optional output derivative of rvec3 with regard to rvec1</dd>
|
|
<dd><code>dr3dt1</code> - Optional output derivative of rvec3 with regard to tvec1</dd>
|
|
<dd><code>dr3dr2</code> - Optional output derivative of rvec3 with regard to rvec2</dd>
|
|
<dd><code>dr3dt2</code> - Optional output derivative of rvec3 with regard to tvec2</dd>
|
|
<dd><code>dt3dr1</code> - Optional output derivative of tvec3 with regard to rvec1</dd>
|
|
<dd><code>dt3dt1</code> - Optional output derivative of tvec3 with regard to tvec1</dd>
|
|
<dd><code>dt3dr2</code> - Optional output derivative of tvec3 with regard to rvec2
|
|
|
|
The functions compute:
|
|
|
|
\(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\)
|
|
|
|
where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and
|
|
\(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details.
|
|
|
|
Also, the functions can compute the derivatives of the output vectors with regard to the input
|
|
vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in
|
|
your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
|
|
function that contains a matrix multiplication.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>composeRT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">composeRT</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dt1)</span></div>
|
|
<div class="block">Combines two rotation-and-shift transformations.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rvec1</code> - First rotation vector.</dd>
|
|
<dd><code>tvec1</code> - First translation vector.</dd>
|
|
<dd><code>rvec2</code> - Second rotation vector.</dd>
|
|
<dd><code>tvec2</code> - Second translation vector.</dd>
|
|
<dd><code>rvec3</code> - Output rotation vector of the superposition.</dd>
|
|
<dd><code>tvec3</code> - Output translation vector of the superposition.</dd>
|
|
<dd><code>dr3dr1</code> - Optional output derivative of rvec3 with regard to rvec1</dd>
|
|
<dd><code>dr3dt1</code> - Optional output derivative of rvec3 with regard to tvec1</dd>
|
|
<dd><code>dr3dr2</code> - Optional output derivative of rvec3 with regard to rvec2</dd>
|
|
<dd><code>dr3dt2</code> - Optional output derivative of rvec3 with regard to tvec2</dd>
|
|
<dd><code>dt3dr1</code> - Optional output derivative of tvec3 with regard to rvec1</dd>
|
|
<dd><code>dt3dt1</code> - Optional output derivative of tvec3 with regard to tvec1
|
|
|
|
The functions compute:
|
|
|
|
\(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\)
|
|
|
|
where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and
|
|
\(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details.
|
|
|
|
Also, the functions can compute the derivatives of the output vectors with regard to the input
|
|
vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in
|
|
your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
|
|
function that contains a matrix multiplication.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>composeRT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">composeRT</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dt3dr1)</span></div>
|
|
<div class="block">Combines two rotation-and-shift transformations.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rvec1</code> - First rotation vector.</dd>
|
|
<dd><code>tvec1</code> - First translation vector.</dd>
|
|
<dd><code>rvec2</code> - Second rotation vector.</dd>
|
|
<dd><code>tvec2</code> - Second translation vector.</dd>
|
|
<dd><code>rvec3</code> - Output rotation vector of the superposition.</dd>
|
|
<dd><code>tvec3</code> - Output translation vector of the superposition.</dd>
|
|
<dd><code>dr3dr1</code> - Optional output derivative of rvec3 with regard to rvec1</dd>
|
|
<dd><code>dr3dt1</code> - Optional output derivative of rvec3 with regard to tvec1</dd>
|
|
<dd><code>dr3dr2</code> - Optional output derivative of rvec3 with regard to rvec2</dd>
|
|
<dd><code>dr3dt2</code> - Optional output derivative of rvec3 with regard to tvec2</dd>
|
|
<dd><code>dt3dr1</code> - Optional output derivative of tvec3 with regard to rvec1
|
|
|
|
The functions compute:
|
|
|
|
\(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\)
|
|
|
|
where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and
|
|
\(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details.
|
|
|
|
Also, the functions can compute the derivatives of the output vectors with regard to the input
|
|
vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in
|
|
your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
|
|
function that contains a matrix multiplication.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>composeRT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">composeRT</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt2)</span></div>
|
|
<div class="block">Combines two rotation-and-shift transformations.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rvec1</code> - First rotation vector.</dd>
|
|
<dd><code>tvec1</code> - First translation vector.</dd>
|
|
<dd><code>rvec2</code> - Second rotation vector.</dd>
|
|
<dd><code>tvec2</code> - Second translation vector.</dd>
|
|
<dd><code>rvec3</code> - Output rotation vector of the superposition.</dd>
|
|
<dd><code>tvec3</code> - Output translation vector of the superposition.</dd>
|
|
<dd><code>dr3dr1</code> - Optional output derivative of rvec3 with regard to rvec1</dd>
|
|
<dd><code>dr3dt1</code> - Optional output derivative of rvec3 with regard to tvec1</dd>
|
|
<dd><code>dr3dr2</code> - Optional output derivative of rvec3 with regard to rvec2</dd>
|
|
<dd><code>dr3dt2</code> - Optional output derivative of rvec3 with regard to tvec2
|
|
|
|
The functions compute:
|
|
|
|
\(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\)
|
|
|
|
where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and
|
|
\(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details.
|
|
|
|
Also, the functions can compute the derivatives of the output vectors with regard to the input
|
|
vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in
|
|
your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
|
|
function that contains a matrix multiplication.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>composeRT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">composeRT</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr2)</span></div>
|
|
<div class="block">Combines two rotation-and-shift transformations.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rvec1</code> - First rotation vector.</dd>
|
|
<dd><code>tvec1</code> - First translation vector.</dd>
|
|
<dd><code>rvec2</code> - Second rotation vector.</dd>
|
|
<dd><code>tvec2</code> - Second translation vector.</dd>
|
|
<dd><code>rvec3</code> - Output rotation vector of the superposition.</dd>
|
|
<dd><code>tvec3</code> - Output translation vector of the superposition.</dd>
|
|
<dd><code>dr3dr1</code> - Optional output derivative of rvec3 with regard to rvec1</dd>
|
|
<dd><code>dr3dt1</code> - Optional output derivative of rvec3 with regard to tvec1</dd>
|
|
<dd><code>dr3dr2</code> - Optional output derivative of rvec3 with regard to rvec2
|
|
|
|
The functions compute:
|
|
|
|
\(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\)
|
|
|
|
where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and
|
|
\(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details.
|
|
|
|
Also, the functions can compute the derivatives of the output vectors with respect to the input
|
|
vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in
|
|
your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
|
|
function that contains a matrix multiplication.</dd>
|
|
</dl>
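The composition formula above can be checked numerically without OpenCV. The sketch below implements the Rodrigues rotation-vector-to-matrix conversion directly and verifies tvec3 = R2 * tvec1 + tvec2 for a simple case; all numeric values are hypothetical and this is not a substitute for calling composeRT itself.

```java
// Numerical check of the composeRT translation formula (no OpenCV needed):
// tvec3 = rodrigues(rvec2) * tvec1 + tvec2.
public class ComposeRTFormula {
    // Rodrigues formula: rotation vector -> 3x3 rotation matrix,
    // R = I + sin(theta) K + (1 - cos(theta)) K^2, K = skew(unit axis).
    static double[][] rodrigues(double[] r) {
        double theta = Math.sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
        double[][] R = {{1,0,0},{0,1,0},{0,0,1}};
        if (theta < 1e-12) return R;          // near-zero vector -> identity
        double[] u = {r[0]/theta, r[1]/theta, r[2]/theta};
        double c = Math.cos(theta), s = Math.sin(theta);
        double[][] K = {{0,-u[2],u[1]},{u[2],0,-u[0]},{-u[1],u[0],0}};
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                double k2 = 0;                 // (K^2)[i][j]
                for (int m = 0; m < 3; m++) k2 += K[i][m] * K[m][j];
                R[i][j] = (i == j ? 1 : 0) + s * K[i][j] + (1 - c) * k2;
            }
        }
        return R;
    }

    // Apply the rigid transform: out = R * v + t.
    static double[] apply(double[][] R, double[] v, double[] t) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++)
            out[i] = R[i][0]*v[0] + R[i][1]*v[1] + R[i][2]*v[2] + t[i];
        return out;
    }

    public static void main(String[] args) {
        double[] rvec1 = {0, 0, Math.PI / 2}, tvec1 = {1, 0, 0}; // 90 deg about Z
        double[] rvec2 = {0, 0, 0},           tvec2 = {0, 1, 0}; // pure shift
        // tvec3 = R2 * tvec1 + tvec2; R2 is identity here, so tvec3 = (1, 1, 0).
        double[] tvec3 = apply(rodrigues(rvec2), tvec1, tvec2);
        System.out.printf(java.util.Locale.ROOT, "%.1f %.1f %.1f%n",
                tvec3[0], tvec3[1], tvec3[2]);
    }
}
```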
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>composeRT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">composeRT</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dt1)</span></div>
|
|
<div class="block">Combines two rotation-and-shift transformations.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rvec1</code> - First rotation vector.</dd>
|
|
<dd><code>tvec1</code> - First translation vector.</dd>
|
|
<dd><code>rvec2</code> - Second rotation vector.</dd>
|
|
<dd><code>tvec2</code> - Second translation vector.</dd>
|
|
<dd><code>rvec3</code> - Output rotation vector of the superposition.</dd>
|
|
<dd><code>tvec3</code> - Output translation vector of the superposition.</dd>
|
|
<dd><code>dr3dr1</code> - Optional output derivative of rvec3 with regard to rvec1</dd>
|
|
<dd><code>dr3dt1</code> - Optional output derivative of rvec3 with regard to tvec1
|
|
|
|
The functions compute:
|
|
|
|
\(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\)
|
|
|
|
where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and
|
|
\(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details.
|
|
|
|
Also, the functions can compute the derivatives of the output vectors with respect to the input
|
|
vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in
|
|
your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
|
|
function that contains a matrix multiplication.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>composeRT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">composeRT</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dr3dr1)</span></div>
|
|
<div class="block">Combines two rotation-and-shift transformations.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rvec1</code> - First rotation vector.</dd>
|
|
<dd><code>tvec1</code> - First translation vector.</dd>
|
|
<dd><code>rvec2</code> - Second rotation vector.</dd>
|
|
<dd><code>tvec2</code> - Second translation vector.</dd>
|
|
<dd><code>rvec3</code> - Output rotation vector of the superposition.</dd>
|
|
<dd><code>tvec3</code> - Output translation vector of the superposition.</dd>
|
|
<dd><code>dr3dr1</code> - Optional output derivative of rvec3 with regard to rvec1
|
|
|
|
The functions compute:
|
|
|
|
\(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\)
|
|
|
|
where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and
|
|
\(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details.
|
|
|
|
Also, the functions can compute the derivatives of the output vectors with respect to the input
|
|
vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in
|
|
your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
|
|
function that contains a matrix multiplication.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="composeRT(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>composeRT</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">composeRT</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec3)</span></div>
|
|
<div class="block">Combines two rotation-and-shift transformations.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rvec1</code> - First rotation vector.</dd>
|
|
<dd><code>tvec1</code> - First translation vector.</dd>
|
|
<dd><code>rvec2</code> - Second rotation vector.</dd>
|
|
<dd><code>tvec2</code> - Second translation vector.</dd>
|
|
<dd><code>rvec3</code> - Output rotation vector of the superposition.</dd>
|
|
<dd><code>tvec3</code> - Output translation vector of the superposition.
|
|
|
|
The functions compute:
|
|
|
|
\(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\)
|
|
|
|
where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and
|
|
\(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details.
|
|
|
|
Also, the functions can compute the derivatives of the output vectors with respect to the input
|
|
vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in
|
|
your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
|
|
function that contains a matrix multiplication.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="projectPoints(org.opencv.core.MatOfPoint3f,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,double)">
|
|
<h3>projectPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">projectPoints</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> jacobian,
|
|
double aspectRatio)</span></div>
|
|
<div class="block">Projects 3D points to an image plane.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points expressed with respect to the world coordinate frame. A 3xN/Nx3
|
|
1-channel or 1xN/Nx1 3-channel (or vector&lt;Point3f&gt;), where N is the number of points in the view.</dd>
|
|
<dd><code>rvec</code> - The rotation vector (REF: Rodrigues) that, together with tvec, performs a change of
|
|
basis from world to camera coordinate system, see REF: calibrateCamera for details.</dd>
|
|
<dd><code>tvec</code> - The translation vector, see parameter description above.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\) . If the vector is empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>imagePoints</code> - Output array of image points, 1xN/Nx1 2-channel, or
|
|
vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>jacobian</code> - Optional output 2Nx(10+&lt;numDistCoeffs&gt;) jacobian matrix of derivatives of image
|
|
points with respect to components of the rotation vector, translation vector, focal lengths,
|
|
coordinates of the principal point and the distortion coefficients. In the old interface different
|
|
components of the jacobian are returned via different output parameters.</dd>
|
|
<dd><code>aspectRatio</code> - Optional "fixed aspect ratio" parameter. If the parameter is not 0, the
|
|
function assumes that the aspect ratio (\(f_x / f_y\)) is fixed and correspondingly adjusts the
|
|
jacobian matrix.
|
|
|
|
The function computes the 2D projections of 3D points to the image plane, given intrinsic and
|
|
extrinsic camera parameters. Optionally, the function computes Jacobians: matrices of partial
|
|
derivatives of image point coordinates (as functions of all the input parameters) with respect to
|
|
the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global
|
|
optimization in REF: calibrateCamera, REF: solvePnP, and REF: stereoCalibrate. The function itself
|
|
can also be used to compute a re-projection error, given the current intrinsic and extrinsic
|
|
parameters.
|
|
|
|
<b>Note:</b> By setting rvec = tvec = \([0, 0, 0]\), or by setting cameraMatrix to a 3x3 identity matrix,
|
|
or by passing zero distortion coefficients, one can get various useful partial cases of the
|
|
function. This means, one can compute the distorted coordinates for a sparse set of points or apply
|
|
a perspective transformation (and also compute the derivatives) in the ideal zero-distortion setup.</dd>
|
|
</dl>
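In the zero-distortion case, the projection performed per point reduces to the pinhole model: x' = X/Z, y' = Y/Z, then u = f_x * x' + c_x and v = f_y * y' + c_y. The self-contained sketch below illustrates this arithmetic with hypothetical intrinsics; it does not call projectPoints and ignores the distortion and Jacobian machinery.

```java
// Hand-rolled pinhole projection (zero distortion), mirroring the core
// arithmetic projectPoints applies to each 3D point in the camera frame.
public class PinholeProjection {
    public static void main(String[] args) {
        double fx = 800, fy = 800, cx = 320, cy = 240; // hypothetical intrinsics
        double X = 0.5, Y = -0.25, Z = 2.0;            // point in camera coordinates
        // Normalized image coordinates, then pixel coordinates.
        double u = fx * (X / Z) + cx;
        double v = fy * (Y / Z) + cy;
        System.out.printf(java.util.Locale.ROOT, "%.1f %.1f%n", u, v);
    }
}
```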
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="projectPoints(org.opencv.core.MatOfPoint3f,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat)">
|
|
<h3>projectPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">projectPoints</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> jacobian)</span></div>
|
|
<div class="block">Projects 3D points to an image plane.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points expressed with respect to the world coordinate frame. A 3xN/Nx3
|
|
1-channel or 1xN/Nx1 3-channel (or vector&lt;Point3f&gt;), where N is the number of points in the view.</dd>
|
|
<dd><code>rvec</code> - The rotation vector (REF: Rodrigues) that, together with tvec, performs a change of
|
|
basis from world to camera coordinate system, see REF: calibrateCamera for details.</dd>
|
|
<dd><code>tvec</code> - The translation vector, see parameter description above.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\) . If the vector is empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>imagePoints</code> - Output array of image points, 1xN/Nx1 2-channel, or
|
|
vector&lt;Point2f&gt;.</dd>
|
|
<dd><code>jacobian</code> - Optional output 2Nx(10+&lt;numDistCoeffs&gt;) jacobian matrix of derivatives of image
|
|
points with respect to components of the rotation vector, translation vector, focal lengths,
|
|
coordinates of the principal point and the distortion coefficients. In the old interface different
|
|
components of the jacobian are returned via different output parameters.
|
|
|
|
|
|
The function computes the 2D projections of 3D points to the image plane, given intrinsic and
|
|
extrinsic camera parameters. Optionally, the function computes Jacobians: matrices of partial
|
|
derivatives of image point coordinates (as functions of all the input parameters) with respect to
|
|
the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global
|
|
optimization in REF: calibrateCamera, REF: solvePnP, and REF: stereoCalibrate. The function itself
|
|
can also be used to compute a re-projection error, given the current intrinsic and extrinsic
|
|
parameters.
|
|
|
|
<b>Note:</b> By setting rvec = tvec = \([0, 0, 0]\), or by setting cameraMatrix to a 3x3 identity matrix,
|
|
or by passing zero distortion coefficients, one can get various useful partial cases of the
|
|
function. This means, one can compute the distorted coordinates for a sparse set of points or apply
|
|
a perspective transformation (and also compute the derivatives) in the ideal zero-distortion setup.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="projectPoints(org.opencv.core.MatOfPoint3f,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.MatOfPoint2f)">
|
|
<h3>projectPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">projectPoints</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints)</span></div>
|
|
<div class="block">Projects 3D points to an image plane.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points expressed with respect to the world coordinate frame. A 3xN/Nx3
|
|
1-channel or 1xN/Nx1 3-channel (or vector&lt;Point3f&gt;), where N is the number of points in the view.</dd>
|
|
<dd><code>rvec</code> - The rotation vector (REF: Rodrigues) that, together with tvec, performs a change of
|
|
basis from world to camera coordinate system, see REF: calibrateCamera for details.</dd>
|
|
<dd><code>tvec</code> - The translation vector, see parameter description above.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\) . If the vector is empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>imagePoints</code> - Output array of image points, 1xN/Nx1 2-channel, or
|
|
vector&lt;Point2f&gt;.
|
|
|
|
|
|
The function computes the 2D projections of 3D points to the image plane, given intrinsic and
|
|
extrinsic camera parameters. Optionally, the function computes Jacobians: matrices of partial
|
|
derivatives of image point coordinates (as functions of all the input parameters) with respect to
|
|
the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global
|
|
optimization in REF: calibrateCamera, REF: solvePnP, and REF: stereoCalibrate. The function itself
|
|
can also be used to compute a re-projection error, given the current intrinsic and extrinsic
|
|
parameters.
|
|
|
|
<b>Note:</b> By setting rvec = tvec = \([0, 0, 0]\), or by setting cameraMatrix to a 3x3 identity matrix,
|
|
or by passing zero distortion coefficients, one can get various useful partial cases of the
|
|
function. This means, one can compute the distorted coordinates for a sparse set of points or apply
|
|
a perspective transformation (and also compute the derivatives) in the ideal zero-distortion setup.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnP(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int)">
|
|
<h3>solvePnP</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnP</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int flags)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences:
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP
|
|
|
|
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
|
|
coordinate frame to the camera coordinate frame, using different methods:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
|
|
</li>
|
|
</ul></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>flags</code> - Method for solving a PnP problem: see REF: calib3d_solvePnP_flags
|
|
|
|
More information about Perspective-n-Point (PnP) is described in REF: calib3d_solvePnP
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnP for planar augmented reality can be found at
|
|
opencv_source_code/samples/python/plane_ar.py
|
|
</li>
|
|
<li>
|
|
If you are using Python:
|
|
<ul>
|
|
<li>
|
|
Numpy array slices won't work as input because solvePnP requires contiguous
|
|
arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
|
|
modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
</li>
|
|
<li>
|
|
The P3P algorithm requires image points to be in an array of shape (N,1,2) due
|
|
to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
which requires 2-channel information.
|
|
</li>
|
|
<li>
|
|
Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of
|
|
it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints =
|
|
np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
|
|
</li>
|
|
</ul>
|
|
<li>
|
|
The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are
|
|
unstable and sometimes give completely wrong results. If you pass one of these two
|
|
flags, REF: SOLVEPNP_EPNP method will be used instead.
|
|
</li>
|
|
<li>
|
|
The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P
|
|
methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions
|
|
of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_ITERATIVE method and <code>useExtrinsicGuess=true</code>, the minimum number of points is 3 (3 points
|
|
are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the
|
|
global solution to converge.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_SQPNP input points must be >= 3
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
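The corner ordering required by SOLVEPNP_IPPE_SQUARE can be generated mechanically from the marker side length, as sketched below. This is a minimal pure-Java illustration with a hypothetical marker size; the actual solvePnP call, which would consume these points as a MatOfPoint3f, is omitted.

```java
// Build object points in the order SOLVEPNP_IPPE_SQUARE expects:
// point 0 top-left, 1 top-right, 2 bottom-right, 3 bottom-left, all at Z = 0.
public class IppeSquarePoints {
    static double[][] markerObjectPoints(double squareLength) {
        double h = squareLength / 2.0;
        return new double[][] {
            {-h,  h, 0},  // point 0: [-squareLength/2,  squareLength/2, 0]
            { h,  h, 0},  // point 1: [ squareLength/2,  squareLength/2, 0]
            { h, -h, 0},  // point 2: [ squareLength/2, -squareLength/2, 0]
            {-h, -h, 0},  // point 3: [-squareLength/2, -squareLength/2, 0]
        };
    }

    public static void main(String[] args) {
        double[][] pts = markerObjectPoints(0.04); // hypothetical 4 cm marker
        for (double[] p : pts)
            System.out.printf(java.util.Locale.ROOT, "%.2f %.2f %.2f%n",
                    p[0], p[1], p[2]);
    }
}
```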
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnP(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean)">
|
|
<h3>solvePnP</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnP</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences:
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP
|
|
|
|
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
|
|
coordinate frame to the camera coordinate frame, using different methods:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
|
|
</li>
|
|
</ul></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.
|
|
|
|
More information about Perspective-n-Point (PnP) is described in REF: calib3d_solvePnP
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnP for planar augmented reality can be found at
|
|
opencv_source_code/samples/python/plane_ar.py
|
|
</li>
|
|
<li>
|
|
If you are using Python:
|
|
<ul>
|
|
<li>
|
|
Numpy array slices won't work as input because solvePnP requires contiguous
|
|
arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
|
|
modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
</li>
|
|
<li>
|
|
The P3P algorithm requires image points to be in an array of shape (N,1,2) due
|
|
to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
which requires 2-channel information.
|
|
</li>
|
|
<li>
|
|
Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of
|
|
it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints =
|
|
np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
|
|
</li>
|
|
</ul>
|
|
<li>
|
|
The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are
|
|
unstable and sometimes give completely wrong results. If you pass one of these two
|
|
flags, REF: SOLVEPNP_EPNP method will be used instead.
|
|
</li>
|
|
<li>
|
|
The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P
|
|
methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions
|
|
of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_ITERATIVE method and <code>useExtrinsicGuess=true</code>, the minimum number of points is 3 (3 points
|
|
are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the
|
|
global solution to converge.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_SQPNP input points must be >= 3
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnP(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>solvePnP</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnP</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences:
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP
|
|
|
|
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
|
|
coordinate frame to the camera coordinate frame, using different methods:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
|
|
</li>
|
|
</ul></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector. With <code>useExtrinsicGuess=true</code> (available in other overloads), the function uses
the provided rvec and tvec values as initial approximations of the rotation and translation
vectors, respectively, and further optimizes them.
|
|
|
|
More information about the Perspective-n-Point problem is described in REF: calib3d_solvePnP
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnP for planar augmented reality can be found at
|
|
opencv_source_code/samples/python/plane_ar.py
|
|
</li>
|
|
<li>
|
|
If you are using Python:
|
|
<ul>
|
|
<li>
|
|
Numpy array slices won't work as input because solvePnP requires contiguous
|
|
arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
|
|
modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
</li>
|
|
<li>
|
|
The P3P algorithm requires image points to be in an array of shape (N,1,2) due
|
|
to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
which requires 2-channel information.
|
|
</li>
|
|
<li>
|
|
Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of
|
|
it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints =
|
|
np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are
|
|
unstable and sometimes give completely wrong results. If you pass one of these two
|
|
flags, REF: SOLVEPNP_EPNP method will be used instead.
|
|
</li>
|
|
<li>
|
|
The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P
|
|
methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions
|
|
of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).
|
|
</li>
|
|
<li>
|
|
With the REF: SOLVEPNP_ITERATIVE method and <code>useExtrinsicGuess=true</code>, the minimum number of points is 3 (3 points
are sufficient to compute a pose, but there are up to 4 solutions). The initial solution should be close to the
global solution for the optimization to converge.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE is a special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_SQPNP, the number of input points must be >= 3.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
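solvePnP searches for the rvec/tvec that map objectPoints into the camera frame so that their projections land as close as possible to imagePoints. The projection it reprojects through can be sketched in plain Java for a point already expressed in the camera frame (zero distortion assumed; this is an OpenCV-free illustration and the class and method names are not part of the API):

```java
// Pinhole projection with intrinsics (fx, fy, cx, cy):
// u = fx * X/Z + cx,  v = fy * Y/Z + cy   (zero distortion assumed).
public class PinholeProjection {
    static double[] project(double[] pCam, double fx, double fy, double cx, double cy) {
        double invZ = 1.0 / pCam[2];  // point must lie in front of the camera (Z > 0)
        return new double[] {
            fx * pCam[0] * invZ + cx,
            fy * pCam[1] * invZ + cy
        };
    }

    public static void main(String[] args) {
        // A point 2 m ahead, 0.5 m to the right and 0.25 m below the optical axis.
        double[] uv = project(new double[] { 0.5, 0.25, 2.0 }, 800, 800, 320, 240);
        System.out.println(uv[0] + ", " + uv[1]);
    }
}
```

The reprojection error minimized by solvePnP is the sum of squared distances between such projected coordinates and the observed imagePoints.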
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double,org.opencv.core.Mat,int)">
|
|
<h3>solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int flags)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations.</dd>
|
|
<dd><code>reprojectionError</code> - Inlier threshold value used by the RANSAC procedure. The parameter value
|
|
is the maximum allowed distance between the observed and computed point projections to consider it
|
|
an inlier.</dd>
|
|
<dd><code>confidence</code> - The probability that the algorithm produces a useful result.</dd>
|
|
<dd><code>inliers</code> - Output vector that contains indices of inliers in objectPoints and imagePoints .</dd>
|
|
<dd><code>flags</code> - Method for solving a PnP problem (see REF: solvePnP ).
|
|
|
|
The function estimates an object pose given a set of object points, their corresponding image
|
|
projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such
|
|
a pose that minimizes reprojection error, that is, the sum of squared distances between the observed
|
|
projections imagePoints and the projected (using REF: projectPoints ) objectPoints. The use of RANSAC
|
|
makes the function resistant to outliers.
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnPRansac for object detection can be found at
|
|
REF: tutorial_real_time_pose
|
|
</li>
|
|
<li>
|
|
The default method used to estimate the camera pose for the Minimal Sample Sets step
|
|
is #SOLVEPNP_EPNP. Exceptions are:
|
|
<ul>
|
|
<li>
|
|
if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
|
|
</li>
|
|
<li>
|
|
if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The method used to estimate the camera pose using all the inliers is defined by the
|
|
flags parameters unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case,
|
|
the method #SOLVEPNP_EPNP will be used instead.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
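The interplay between <code>iterationsCount</code>, <code>confidence</code>, and the expected inlier ratio follows the standard RANSAC sampling bound. A minimal sketch under that assumption (this is the textbook formula, not necessarily the exact internal schedule OpenCV uses; the class and method names are illustrative):

```java
// Standard RANSAC bound: smallest k such that the probability of drawing at
// least one all-inlier minimal sample of size s within k tries reaches
// `confidence`, given inlier ratio w:  k = ceil(log(1 - confidence) / log(1 - w^s)).
public class RansacIterations {
    static int requiredIterations(double confidence, double inlierRatio, int sampleSize) {
        double pAllInliers = Math.pow(inlierRatio, sampleSize);
        if (pAllInliers >= 1.0) return 1;  // every minimal sample is outlier-free
        return (int) Math.ceil(Math.log(1.0 - confidence) / Math.log(1.0 - pAllInliers));
    }

    public static void main(String[] args) {
        // Sample size 4 matches the minimal sets used by the default EPNP/P3P step.
        System.out.println(requiredIterations(0.99, 0.5, 4));
    }
}
```

With half the matches expected to be outliers, roughly 72 samples of size 4 reach 99% confidence, which is why a modest iterationsCount usually suffices.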
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double,org.opencv.core.Mat)">
|
|
<h3>solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations.</dd>
|
|
<dd><code>reprojectionError</code> - Inlier threshold value used by the RANSAC procedure. The parameter value
|
|
is the maximum allowed distance between the observed and computed point projections to consider it
|
|
an inlier.</dd>
|
|
<dd><code>confidence</code> - The probability that the algorithm produces a useful result.</dd>
|
|
<dd><code>inliers</code> - Output vector that contains indices of inliers in objectPoints and imagePoints .
|
|
|
|
The function estimates an object pose given a set of object points, their corresponding image
|
|
projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such
|
|
a pose that minimizes reprojection error, that is, the sum of squared distances between the observed
|
|
projections imagePoints and the projected (using REF: projectPoints ) objectPoints. The use of RANSAC
|
|
makes the function resistant to outliers.
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnPRansac for object detection can be found at
|
|
REF: tutorial_real_time_pose
|
|
</li>
|
|
<li>
|
|
The default method used to estimate the camera pose for the Minimal Sample Sets step
|
|
is #SOLVEPNP_EPNP. Exceptions are:
|
|
<ul>
|
|
<li>
|
|
if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
|
|
</li>
|
|
<li>
|
|
if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The method used to estimate the camera pose using all the inliers is defined by the
|
|
flags parameters unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case,
|
|
the method #SOLVEPNP_EPNP will be used instead.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double)">
|
|
<h3>solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations.</dd>
|
|
<dd><code>reprojectionError</code> - Inlier threshold value used by the RANSAC procedure. The parameter value
|
|
is the maximum allowed distance between the observed and computed point projections to consider it
|
|
an inlier.</dd>
|
|
<dd><code>confidence</code> - The probability that the algorithm produces a useful result.
|
|
|
|
The function estimates an object pose given a set of object points, their corresponding image
|
|
projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such
|
|
a pose that minimizes reprojection error, that is, the sum of squared distances between the observed
|
|
projections imagePoints and the projected (using REF: projectPoints ) objectPoints. The use of RANSAC
|
|
makes the function resistant to outliers.
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnPRansac for object detection can be found at
|
|
REF: tutorial_real_time_pose
|
|
</li>
|
|
<li>
|
|
The default method used to estimate the camera pose for the Minimal Sample Sets step
|
|
is #SOLVEPNP_EPNP. Exceptions are:
|
|
<ul>
|
|
<li>
|
|
if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
|
|
</li>
|
|
<li>
|
|
if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The method used to estimate the camera pose using all the inliers is defined by the
|
|
flags parameters unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case,
|
|
the method #SOLVEPNP_EPNP will be used instead.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float)">
|
|
<h3>solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations.</dd>
|
|
<dd><code>reprojectionError</code> - Inlier threshold value used by the RANSAC procedure. The parameter value
|
|
is the maximum allowed distance between the observed and computed point projections to consider it
|
|
an inlier.
|
|
|
|
The function estimates an object pose given a set of object points, their corresponding image
|
|
projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such
|
|
a pose that minimizes reprojection error, that is, the sum of squared distances between the observed
|
|
projections imagePoints and the projected (using REF: projectPoints ) objectPoints. The use of RANSAC
|
|
makes the function resistant to outliers.
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnPRansac for object detection can be found at
|
|
REF: tutorial_real_time_pose
|
|
</li>
|
|
<li>
|
|
The default method used to estimate the camera pose for the Minimal Sample Sets step
|
|
is #SOLVEPNP_EPNP. Exceptions are:
|
|
<ul>
|
|
<li>
|
|
if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
|
|
</li>
|
|
<li>
|
|
if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The method used to estimate the camera pose using all the inliers is defined by the
|
|
flags parameters unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case,
|
|
the method #SOLVEPNP_EPNP will be used instead.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int)">
|
|
<h3>solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations. The omitted <code>reprojectionError</code> parameter, the maximum allowed
distance between the observed and computed point projections for a correspondence to be considered
an inlier, takes its default value.
|
|
|
|
The function estimates an object pose given a set of object points, their corresponding image
|
|
projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such
|
|
a pose that minimizes reprojection error, that is, the sum of squared distances between the observed
|
|
projections imagePoints and the projected (using REF: projectPoints ) objectPoints. The use of RANSAC
|
|
makes the function resistant to outliers.
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnPRansac for object detection can be found at
|
|
REF: tutorial_real_time_pose
|
|
</li>
|
|
<li>
|
|
The default method used to estimate the camera pose for the Minimal Sample Sets step
|
|
is #SOLVEPNP_EPNP. Exceptions are:
|
|
<ul>
|
|
<li>
|
|
if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
|
|
</li>
|
|
<li>
|
|
if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The method used to estimate the camera pose using all the inliers is defined by the
|
|
flags parameters unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case,
|
|
the method #SOLVEPNP_EPNP will be used instead.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,boolean)">
|
|
<h3>solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses
the provided rvec and tvec values as initial approximations of the rotation and translation
vectors, respectively, and further optimizes them. The omitted RANSAC parameters take their default values.
|
|
|
|
The function estimates an object pose given a set of object points, their corresponding image
|
|
projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such
|
|
a pose that minimizes reprojection error, that is, the sum of squared distances between the observed
|
|
projections imagePoints and the projected (using REF: projectPoints ) objectPoints. The use of RANSAC
|
|
makes the function resistant to outliers.
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnPRansac for object detection can be found at
|
|
REF: tutorial_real_time_pose
|
|
</li>
|
|
<li>
|
|
The default method used to estimate the camera pose for the Minimal Sample Sets step
|
|
is #SOLVEPNP_EPNP. Exceptions are:
|
|
<ul>
|
|
<li>
|
|
if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
|
|
</li>
|
|
<li>
|
|
if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The method used to estimate the camera pose using all the inliers is defined by the
|
|
flags parameters unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case,
|
|
the method #SOLVEPNP_EPNP will be used instead.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences using the RANSAC scheme to deal with bad matches.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can be also passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can be also passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.

In the overloads that expose them, useExtrinsicGuess makes the function use
the provided rvec and tvec values as initial approximations of the rotation and translation
vectors, respectively, and further optimize them, while reprojectionError
is the maximum allowed distance between the observed and computed point projections to consider a point
an inlier.
|
|
|
|
The function estimates an object pose given a set of object points, their corresponding image
|
|
projections, as well as the camera intrinsic matrix and the distortion coefficients. The function finds
a pose that minimizes the reprojection error, that is, the sum of squared distances between the observed
|
|
projections imagePoints and the projected (using REF: projectPoints ) objectPoints. The use of RANSAC
|
|
makes the function resistant to outliers.
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnPRansac for object detection can be found at
|
|
REF: tutorial_real_time_pose
|
|
</li>
|
|
<li>
|
|
The default method used to estimate the camera pose for the Minimal Sample Sets step
|
|
is #SOLVEPNP_EPNP. Exceptions are:
|
|
<ul>
|
|
<li>
|
|
if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
|
|
</li>
|
|
<li>
|
|
if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The method used to estimate the camera pose using all the inliers is defined by the
|
|
flags parameter, unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case,
|
|
the method #SOLVEPNP_EPNP will be used instead.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
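As a sketch of how this overload might be called. The point data, intrinsics, and class name below are illustrative assumptions, not values from the OpenCV documentation; the OpenCV native library must be available on java.library.path.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class SolvePnPRansacExample {
    public static void main(String[] args) {
        // Native bindings must be loadable for any org.opencv call.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Four coplanar object points: corners of a 10 cm square in the object frame.
        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(-0.05,  0.05, 0), new Point3(0.05,  0.05, 0),
                new Point3( 0.05, -0.05, 0), new Point3(-0.05, -0.05, 0));

        // Corresponding detections in the image (illustrative pixel coordinates).
        MatOfPoint2f imagePoints = new MatOfPoint2f(
                new Point(270, 190), new Point(370, 190),
                new Point(370, 290), new Point(270, 290));

        // Pinhole intrinsics fx = fy = 800, principal point (320, 240).
        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800);
        cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 320);
        cameraMatrix.put(1, 2, 240);

        MatOfDouble distCoeffs = new MatOfDouble(); // empty: zero distortion assumed
        Mat rvec = new Mat();
        Mat tvec = new Mat();

        boolean ok = Calib3d.solvePnPRansac(objectPoints, imagePoints,
                cameraMatrix, distCoeffs, rvec, tvec);
        System.out.println("ok=" + ok);
        System.out.println("rvec=" + rvec.dump());
        System.out.println("tvec=" + tvec.dump());
    }
}
```

Because only four points are passed here, the Minimal Sample Sets step falls back to #SOLVEPNP_P3P, as described in the note above.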
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.calib3d.UsacParams)">
|
|
<h3>solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
<a href="UsacParams.html" title="class in org.opencv.calib3d">UsacParams</a> params)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRansac(org.opencv.core.MatOfPoint3f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.MatOfDouble,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a> objectPoints,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/MatOfDouble.html" title="class in org.opencv.core">MatOfDouble</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solveP3P(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int)">
|
|
<h3>solveP3P</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">solveP3P</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
int flags)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from <b>3</b> 3D-2D point correspondences.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, 3x3 1-channel or
|
|
1x3/3x1 3-channel. vector<Point3f> can be also passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, 3x2 1-channel or 1x3/3x1 2-channel.
|
|
vector<Point2f> can be also passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvecs</code> - Output rotation vectors (see REF: Rodrigues ) that, together with tvecs, bring points from
|
|
the model coordinate system to the camera coordinate system. A P3P problem has up to 4 solutions.</dd>
|
|
<dd><code>tvecs</code> - Output translation vectors.</dd>
|
|
<dd><code>flags</code> - Method for solving a P3P problem:
|
|
<ul>
|
|
<li>
|
|
REF: SOLVEPNP_P3P Method is based on the paper of X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang
|
|
"Complete Solution Classification for the Perspective-Three-Point Problem" (CITE: gao2003complete).
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_AP3P Method is based on the paper of T. Ke and S. Roumeliotis.
|
|
"An Efficient Algebraic Solution to the Perspective-Three-Point Problem" (CITE: Ke17).
|
|
</li>
|
|
</ul>
|
|
|
|
The function estimates the object pose given 3 object points, their corresponding image
|
|
projections, as well as the camera intrinsic matrix and the distortion coefficients.
|
|
|
|
<b>Note:</b>
|
|
The solutions are sorted by reprojection errors (lowest to highest).</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
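A minimal sketch of calling solveP3P with exactly three correspondences; the point data, intrinsics, and class name are illustrative assumptions. A P3P problem can have up to four solutions, returned sorted by reprojection error.

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class SolveP3PExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Three object points (object frame) and their illustrative projections.
        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(0, 0, 0), new Point3(0.1, 0, 0), new Point3(0, 0.1, 0));
        MatOfPoint2f imagePoints = new MatOfPoint2f(
                new Point(320, 240), new Point(420, 240), new Point(320, 340));

        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800);
        cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 320);
        cameraMatrix.put(1, 2, 240);

        List<Mat> rvecs = new ArrayList<>();
        List<Mat> tvecs = new ArrayList<>();
        int n = Calib3d.solveP3P(objectPoints, imagePoints, cameraMatrix,
                new MatOfDouble(), rvecs, tvecs, Calib3d.SOLVEPNP_AP3P);
        System.out.println("solutions: " + n);
        for (int i = 0; i < n; i++) {
            System.out.println("rvec[" + i + "]=" + rvecs.get(i).dump());
        }
    }
}
```

To pick a unique pose from the returned candidates, a fourth correspondence can be reprojected against each solution, which is what solvePnP does internally for the P3P flags.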
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRefineLM(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria)">
|
|
<h3>solvePnPRefineLM</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">solvePnPRefineLM</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block">Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame
|
|
to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel,
|
|
where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Input/Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system. Input values are used as an initial solution.</dd>
|
|
<dd><code>tvec</code> - Input/Output translation vector. Input values are used as an initial solution.</dd>
|
|
<dd><code>criteria</code> - Criteria when to stop the Levenberg-Marquardt iterative algorithm.
|
|
|
|
The function refines the object pose given at least 3 object points, their corresponding image
|
|
projections, an initial solution for the rotation and translation vector,
|
|
as well as the camera intrinsic matrix and the distortion coefficients.
|
|
The function minimizes the projection error with respect to the rotation and the translation vectors, according
|
|
to a Levenberg-Marquardt iterative minimization CITE: Madsen04 CITE: Eade13 process.</dd>
|
|
</dl>
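A sketch of refining a coarse pose with this overload; the initial rvec/tvec, point data, intrinsics, and class name below are illustrative assumptions.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class RefineLMExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(-0.05, 0.05, 0), new Point3(0.05, 0.05, 0),
                new Point3(0.05, -0.05, 0), new Point3(-0.05, -0.05, 0));
        MatOfPoint2f imagePoints = new MatOfPoint2f(
                new Point(270, 190), new Point(370, 190),
                new Point(370, 290), new Point(270, 290));

        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800);
        cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 320);
        cameraMatrix.put(1, 2, 240);

        // Coarse initial solution (input/output): deliberately slightly off.
        Mat rvec = new Mat(3, 1, CvType.CV_64F);
        rvec.put(0, 0, 3.1, 0.05, 0.0);   // roughly a 180-degree rotation about x
        Mat tvec = new Mat(3, 1, CvType.CV_64F);
        tvec.put(0, 0, 0.01, 0.0, 0.9);

        TermCriteria criteria = new TermCriteria(
                TermCriteria.EPS + TermCriteria.COUNT, 50, 1e-8);
        Calib3d.solvePnPRefineLM(objectPoints, imagePoints, cameraMatrix,
                new MatOfDouble(), rvec, tvec, criteria);
        System.out.println("refined rvec=" + rvec.dump());
        System.out.println("refined tvec=" + tvec.dump());
    }
}
```

Since rvec and tvec are input/output, the refined pose overwrites the initial guess in place.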
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRefineLM(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>solvePnPRefineLM</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">solvePnPRefineLM</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</span></div>
|
|
<div class="block">Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame
|
|
to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel,
|
|
where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Input/Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system. Input values are used as an initial solution.</dd>
|
|
<dd><code>tvec</code> - Input/Output translation vector. Input values are used as an initial solution.
|
|
|
|
The function refines the object pose given at least 3 object points, their corresponding image
|
|
projections, an initial solution for the rotation and translation vector,
|
|
as well as the camera intrinsic matrix and the distortion coefficients.
|
|
The function minimizes the projection error with respect to the rotation and the translation vectors, according
|
|
to a Levenberg-Marquardt iterative minimization CITE: Madsen04 CITE: Eade13 process.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRefineVVS(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria,double)">
|
|
<h3>solvePnPRefineVVS</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">solvePnPRefineVVS</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria,
|
|
double VVSlambda)</span></div>
|
|
<div class="block">Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame
|
|
to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel,
|
|
where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Input/Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system. Input values are used as an initial solution.</dd>
|
|
<dd><code>tvec</code> - Input/Output translation vector. Input values are used as an initial solution.</dd>
|
|
<dd><code>criteria</code> - Criteria when to stop the Levenberg-Marquardt iterative algorithm.</dd>
|
|
<dd><code>VVSlambda</code> - Gain for the virtual visual servoing control law, equivalent to the \(\alpha\)
|
|
gain in the Damped Gauss-Newton formulation.
|
|
|
|
The function refines the object pose given at least 3 object points, their corresponding image
|
|
projections, an initial solution for the rotation and translation vector,
|
|
as well as the camera intrinsic matrix and the distortion coefficients.
|
|
The function minimizes the projection error with respect to the rotation and the translation vectors, using a
|
|
virtual visual servoing (VVS) CITE: Chaumette06 CITE: Marchand16 scheme.</dd>
|
|
</dl>
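A sketch of the virtual visual servoing variant; the data, the unit VVSlambda gain, and the class name below are illustrative assumptions.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class RefineVVSExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(-0.05, 0.05, 0), new Point3(0.05, 0.05, 0),
                new Point3(0.05, -0.05, 0), new Point3(-0.05, -0.05, 0));
        MatOfPoint2f imagePoints = new MatOfPoint2f(
                new Point(270, 190), new Point(370, 190),
                new Point(370, 290), new Point(270, 290));

        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800);
        cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 320);
        cameraMatrix.put(1, 2, 240);

        Mat rvec = new Mat(3, 1, CvType.CV_64F);
        rvec.put(0, 0, 3.1, 0.05, 0.0);  // coarse initial rotation
        Mat tvec = new Mat(3, 1, CvType.CV_64F);
        tvec.put(0, 0, 0.01, 0.0, 0.9);  // coarse initial translation

        Calib3d.solvePnPRefineVVS(objectPoints, imagePoints, cameraMatrix,
                new MatOfDouble(), rvec, tvec,
                new TermCriteria(TermCriteria.EPS + TermCriteria.COUNT, 50, 1e-8),
                1.0); // VVSlambda: the servoing gain
        System.out.println("refined tvec=" + tvec.dump());
    }
}
```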
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRefineVVS(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria)">
|
|
<h3>solvePnPRefineVVS</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">solvePnPRefineVVS</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block">Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame
|
|
to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel,
|
|
where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Input/Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system. Input values are used as an initial solution.</dd>
|
|
<dd><code>tvec</code> - Input/Output translation vector. Input values are used as an initial solution.</dd>
|
|
<dd><code>criteria</code> - Criteria when to stop the Levenberg-Marquardt iterative algorithm.

The VVSlambda parameter of the full overload is the gain for the virtual visual servoing control law, equivalent to the \(\alpha\)
gain in the Damped Gauss-Newton formulation.
|
|
|
|
The function refines the object pose given at least 3 object points, their corresponding image
|
|
projections, an initial solution for the rotation and translation vector,
|
|
as well as the camera intrinsic matrix and the distortion coefficients.
|
|
The function minimizes the projection error with respect to the rotation and the translation vectors, using a
|
|
virtual visual servoing (VVS) CITE: Chaumette06 CITE: Marchand16 scheme.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPRefineVVS(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>solvePnPRefineVVS</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">solvePnPRefineVVS</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</span></div>
|
|
<div class="block">Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame
|
|
to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
|
|
|
|
SEE: REF: calib3d_solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel,
|
|
where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvec</code> - Input/Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system. Input values are used as an initial solution.</dd>
|
|
<dd><code>tvec</code> - Input/Output translation vector. Input values are used as an initial solution.

The VVSlambda parameter of the full overload is the gain for the virtual visual servoing control law, equivalent to the \(\alpha\)
gain in the Damped Gauss-Newton formulation.
|
|
|
|
The function refines the object pose given at least 3 object points, their corresponding image
|
|
projections, an initial solution for the rotation and translation vector,
|
|
as well as the camera intrinsic matrix and the distortion coefficients.
|
|
The function minimizes the projection error with respect to the rotation and the translation vectors, using a
|
|
virtual visual servoing (VVS) CITE: Chaumette06 CITE: Marchand16 scheme.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,boolean,int,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>solvePnPGeneric</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">solvePnPGeneric</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
boolean useExtrinsicGuess,
|
|
int flags,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> reprojectionError)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP
|
|
|
|
This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector>
|
|
couple), depending on the number of input points and the chosen method:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
|
|
Only 1 solution is returned.
|
|
</li>
|
|
</ul></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can be also passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can be also passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvecs</code> - Vector of output rotation vectors (see REF: Rodrigues ) that, together with tvecs, bring points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvecs</code> - Vector of output translation vectors.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>flags</code> - Method for solving a PnP problem: see REF: calib3d_solvePnP_flags</dd>
|
|
<dd><code>rvec</code> - Rotation vector used to initialize an iterative PnP refinement algorithm, when flag is REF: SOLVEPNP_ITERATIVE
|
|
and useExtrinsicGuess is set to true.</dd>
|
|
<dd><code>tvec</code> - Translation vector used to initialize an iterative PnP refinement algorithm, when flag is REF: SOLVEPNP_ITERATIVE
|
|
and useExtrinsicGuess is set to true.</dd>
|
|
<dd><code>reprojectionError</code> - Optional vector of reprojection error, that is the RMS error
|
|
(\( \text{RMSE} = \sqrt{\frac{\sum_{i=1}^{N} \left ( \hat{y}_i - y_i \right )^2}{N}} \)) between the input image points
|
|
and the 3D object points projected with the estimated pose.
|
|
|
|
More information is described in REF: calib3d_solvePnP
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnP for planar augmented reality can be found at
|
|
opencv_source_code/samples/python/plane_ar.py
|
|
</li>
|
|
<li>
|
|
If you are using Python:
|
|
<ul>
|
|
<li>
|
|
Numpy array slices won't work as input because solvePnP requires contiguous
|
|
arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
|
|
modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
</li>
|
|
<li>
|
|
The P3P algorithm requires image points to be in an array of shape (N,1,2) due
|
|
to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
which requires 2-channel information.
|
|
</li>
|
|
<li>
|
|
Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of
|
|
it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints =
|
|
np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are
|
|
unstable and sometimes give completely wrong results. If you pass one of these two
|
|
flags, REF: SOLVEPNP_EPNP method will be used instead.
|
|
</li>
|
|
<li>
|
|
The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P
|
|
methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions
|
|
of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_ITERATIVE method and <code>useExtrinsicGuess=true</code>, the minimum number of points is 3 (3 points
|
|
are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the
|
|
global solution to converge.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_SQPNP, input points must be >= 3.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
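A sketch of solvePnPGeneric with REF: SOLVEPNP_IPPE_SQUARE for a square marker; the corner ordering follows the convention stated above, while squareLength, the intrinsics, the detected pixels, and the class name are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class PnPGenericExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        double squareLength = 0.1; // 10 cm marker (assumed)
        // Corners in the order required by SOLVEPNP_IPPE_SQUARE.
        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(-squareLength / 2,  squareLength / 2, 0),  // point 0
                new Point3( squareLength / 2,  squareLength / 2, 0),  // point 1
                new Point3( squareLength / 2, -squareLength / 2, 0),  // point 2
                new Point3(-squareLength / 2, -squareLength / 2, 0)); // point 3
        MatOfPoint2f imagePoints = new MatOfPoint2f(
                new Point(270, 190), new Point(370, 190),
                new Point(370, 290), new Point(270, 290));

        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800);
        cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 320);
        cameraMatrix.put(1, 2, 240);

        List<Mat> rvecs = new ArrayList<>();
        List<Mat> tvecs = new ArrayList<>();
        Mat reprojErr = new Mat();
        int n = Calib3d.solvePnPGeneric(objectPoints, imagePoints, cameraMatrix,
                new MatOfDouble(), rvecs, tvecs,
                false, Calib3d.SOLVEPNP_IPPE_SQUARE, new Mat(), new Mat(),
                reprojErr);
        // Solutions come back sorted by RMS reprojection error.
        System.out.println("solutions: " + n);
        System.out.println("errors: " + reprojErr.dump());
    }
}
```

Since useExtrinsicGuess is false here, the trailing rvec/tvec arguments are unused and may be empty Mats.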
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,boolean,int,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>solvePnPGeneric</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">solvePnPGeneric</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
boolean useExtrinsicGuess,
|
|
int flags,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP
|
|
|
|
This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector>
|
|
pair), depending on the number of input points and the chosen method:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
For all other flags, the number of input points must be >= 4 and the object points can be in any configuration.
|
|
Only 1 solution is returned.
|
|
</li>
|
|
</ul></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvecs</code> - Vector of output rotation vectors (see REF: Rodrigues ) that, together with tvecs, bring points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvecs</code> - Vector of output translation vectors.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>flags</code> - Method for solving a PnP problem: see REF: calib3d_solvePnP_flags</dd>
|
|
<dd><code>rvec</code> - Rotation vector used to initialize an iterative PnP refinement algorithm, when flag is REF: SOLVEPNP_ITERATIVE
|
|
and useExtrinsicGuess is set to true.</dd>
|
|
<dd><code>tvec</code> - Translation vector used to initialize an iterative PnP refinement algorithm, when flag is REF: SOLVEPNP_ITERATIVE
|
|
and useExtrinsicGuess is set to true.
|
|
Overloads that additionally take a <code>reprojectionError</code> output argument also report the RMS error
(\( \text{RMSE} = \sqrt{\frac{\sum_{i}^{N} \left ( \hat{y_i} - y_i \right )^2}{N}} \)) between the input image points
and the 3D object points projected with the estimated pose.
|
|
|
|
More information is described in REF: calib3d_solvePnP
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnP for planar augmented reality can be found at
|
|
opencv_source_code/samples/python/plane_ar.py
|
|
</li>
|
|
<li>
|
|
If you are using Python:
|
|
<ul>
|
|
<li>
|
|
Numpy array slices won't work as input because solvePnP requires contiguous
|
|
arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
|
|
modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
</li>
|
|
<li>
|
|
The P3P algorithm requires image points to be in an array of shape (N,1,2) due
|
|
to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
which requires 2-channel information.
|
|
</li>
|
|
<li>
|
|
Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of
|
|
it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints =
|
|
np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are
|
|
unstable and sometimes give completely wrong results. If you pass one of these two
|
|
flags, REF: SOLVEPNP_EPNP method will be used instead.
|
|
</li>
|
|
<li>
|
|
The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P
|
|
methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions
|
|
of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_ITERATIVE method and <code>useExtrinsicGuess=true</code>, the minimum number of points is 3 (3 points
|
|
are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the
|
|
global solution for the optimization to converge.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE is a special case suitable for marker pose estimation.
The number of input points must be 4, and the object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_SQPNP, the number of input points must be >= 3.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the number of solutions found</dd>
|
|
</dl>
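The RMSE formula quoted in the parameter notes can be applied directly. The sketch below is a hypothetical plain-Java helper (not an OpenCV function); it treats each squared residual as the squared Euclidean distance between corresponding 2D points:

```java
// Reprojection RMSE between observed image points y_i and points yHat_i
// projected with an estimated pose:
//   RMSE = sqrt( (1/N) * sum_i |yHat_i - y_i|^2 )
// where |.|^2 is the squared Euclidean norm of the 2D residual.
public class ReprojectionRmse {
    public static double rmse(double[][] observed, double[][] projected) {
        if (observed.length != projected.length || observed.length == 0) {
            throw new IllegalArgumentException("point lists must match and be non-empty");
        }
        double sum = 0.0;
        for (int i = 0; i < observed.length; i++) {
            double dx = projected[i][0] - observed[i][0];
            double dy = projected[i][1] - observed[i][1];
            sum += dx * dx + dy * dy;
        }
        return Math.sqrt(sum / observed.length);
    }

    public static void main(String[] args) {
        double[][] obs  = { {100, 100}, {200, 100}, {200, 200}, {100, 200} };
        double[][] proj = { {101, 100}, {200, 101}, {199, 200}, {100, 199} };
        // Each point is off by exactly one pixel along one axis -> RMSE 1.0.
        System.out.println(rmse(obs, proj));
    }
}
```

With real data, `projected` would come from projecting the object points through the estimated rvec/tvec (e.g. via projectPoints) before comparing against the detected image points.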
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,boolean,int,org.opencv.core.Mat)">
|
|
<h3>solvePnPGeneric</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">solvePnPGeneric</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
boolean useExtrinsicGuess,
|
|
int flags,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP
|
|
|
|
This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector>
|
|
pair), depending on the number of input points and the chosen method:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
For all other flags, the number of input points must be >= 4 and the object points can be in any configuration.
|
|
Only 1 solution is returned.
|
|
</li>
|
|
</ul></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvecs</code> - Vector of output rotation vectors (see REF: Rodrigues ) that, together with tvecs, bring points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvecs</code> - Vector of output translation vectors.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>flags</code> - Method for solving a PnP problem: see REF: calib3d_solvePnP_flags</dd>
|
|
<dd><code>rvec</code> - Rotation vector used to initialize an iterative PnP refinement algorithm, when flag is REF: SOLVEPNP_ITERATIVE
|
|
and useExtrinsicGuess is set to true.
Overloads that additionally take a <code>reprojectionError</code> output argument also report the RMS error
(\( \text{RMSE} = \sqrt{\frac{\sum_{i}^{N} \left ( \hat{y_i} - y_i \right )^2}{N}} \)) between the input image points
and the 3D object points projected with the estimated pose.
|
|
|
|
More information is described in REF: calib3d_solvePnP
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnP for planar augmented reality can be found at
|
|
opencv_source_code/samples/python/plane_ar.py
|
|
</li>
|
|
<li>
|
|
If you are using Python:
|
|
<ul>
|
|
<li>
|
|
Numpy array slices won't work as input because solvePnP requires contiguous
|
|
arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
|
|
modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
</li>
|
|
<li>
|
|
The P3P algorithm requires image points to be in an array of shape (N,1,2) due
|
|
to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
which requires 2-channel information.
|
|
</li>
|
|
<li>
|
|
Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of
|
|
it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints =
|
|
np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are
|
|
unstable and sometimes give completely wrong results. If you pass one of these two
|
|
flags, REF: SOLVEPNP_EPNP method will be used instead.
|
|
</li>
|
|
<li>
|
|
The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P
|
|
methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions
|
|
of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_ITERATIVE method and <code>useExtrinsicGuess=true</code>, the minimum number of points is 3 (3 points
|
|
are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the
|
|
global solution for the optimization to converge.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE is a special case suitable for marker pose estimation.
The number of input points must be 4, and the object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_SQPNP, the number of input points must be >= 3.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the number of solutions found</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,boolean,int)">
|
|
<h3>solvePnPGeneric</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">solvePnPGeneric</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
boolean useExtrinsicGuess,
|
|
int flags)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP
|
|
|
|
This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector>
|
|
pair), depending on the number of input points and the chosen method:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
For all other flags, the number of input points must be >= 4 and the object points can be in any configuration.
|
|
Only 1 solution is returned.
|
|
</li>
|
|
</ul></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvecs</code> - Vector of output rotation vectors (see REF: Rodrigues ) that, together with tvecs, bring points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvecs</code> - Vector of output translation vectors.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>flags</code> - Method for solving a PnP problem: see REF: calib3d_solvePnP_flags
Overloads that additionally take a <code>reprojectionError</code> output argument also report the RMS error
(\( \text{RMSE} = \sqrt{\frac{\sum_{i}^{N} \left ( \hat{y_i} - y_i \right )^2}{N}} \)) between the input image points
and the 3D object points projected with the estimated pose.
|
|
|
|
More information is described in REF: calib3d_solvePnP
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnP for planar augmented reality can be found at
|
|
opencv_source_code/samples/python/plane_ar.py
|
|
</li>
|
|
<li>
|
|
If you are using Python:
|
|
<ul>
|
|
<li>
|
|
Numpy array slices won't work as input because solvePnP requires contiguous
|
|
arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
|
|
modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
</li>
|
|
<li>
|
|
The P3P algorithm requires image points to be in an array of shape (N,1,2) due
|
|
to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
which requires 2-channel information.
|
|
</li>
|
|
<li>
|
|
Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of
|
|
it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints =
|
|
np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are
|
|
unstable and sometimes give completely wrong results. If you pass one of these two
|
|
flags, REF: SOLVEPNP_EPNP method will be used instead.
|
|
</li>
|
|
<li>
|
|
The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P
|
|
methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions
|
|
of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_ITERATIVE method and <code>useExtrinsicGuess=true</code>, the minimum number of points is 3 (3 points
|
|
are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the
|
|
global solution for the optimization to converge.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE is a special case suitable for marker pose estimation.
The number of input points must be 4, and the object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_SQPNP, the number of input points must be >= 3.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the number of solutions found</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,boolean)">
|
|
<h3>solvePnPGeneric</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">solvePnPGeneric</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
boolean useExtrinsicGuess)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP
|
|
|
|
This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector>
|
|
pair), depending on the number of input points and the chosen method:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
For all other flags, the number of input points must be >= 4 and the object points can be in any configuration.
|
|
Only 1 solution is returned.
|
|
</li>
|
|
</ul></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvecs</code> - Vector of output rotation vectors (see REF: Rodrigues ) that, together with tvecs, bring points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvecs</code> - Vector of output translation vectors.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.
Overloads that additionally take a <code>reprojectionError</code> output argument also report the RMS error
(\( \text{RMSE} = \sqrt{\frac{\sum_{i}^{N} \left ( \hat{y_i} - y_i \right )^2}{N}} \)) between the input image points
and the 3D object points projected with the estimated pose.
|
|
|
|
More information is described in REF: calib3d_solvePnP
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnP for planar augmented reality can be found at
|
|
opencv_source_code/samples/python/plane_ar.py
|
|
</li>
|
|
<li>
|
|
If you are using Python:
|
|
<ul>
|
|
<li>
|
|
Numpy array slices won't work as input because solvePnP requires contiguous
|
|
arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
|
|
modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
</li>
|
|
<li>
|
|
The P3P algorithm requires image points to be in an array of shape (N,1,2) due
|
|
to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
which requires 2-channel information.
|
|
</li>
|
|
<li>
|
|
Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of
|
|
it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints =
|
|
np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are
|
|
unstable and sometimes give completely wrong results. If you pass one of these two
|
|
flags, REF: SOLVEPNP_EPNP method will be used instead.
|
|
</li>
|
|
<li>
|
|
The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P
|
|
methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions
|
|
of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_ITERATIVE method and <code>useExtrinsicGuess=true</code>, the minimum number of points is 3 (3 points
|
|
are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the
|
|
global solution for the optimization to converge.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE is a special case suitable for marker pose estimation.
The number of input points must be 4, and the object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_SQPNP, the number of input points must be >= 3.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the number of solutions found</dd>
|
|
</dl>
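Since solvePnPGeneric can return several candidate poses (e.g. two for SOLVEPNP_IPPE), a caller typically keeps the one with the smallest reprojection error. The sketch below is a minimal plain-Java illustration; the class name and the error values are assumptions, not produced by OpenCV here:

```java
// Keep the candidate pose whose reprojection error is smallest. With
// solvePnPGeneric, rvecs/tvecs hold one entry per returned solution; a
// parallel array of per-solution reprojection errors is assumed here.
public class BestPoseIndex {
    public static int bestPose(double[] reprojectionErrors) {
        int best = 0;
        for (int i = 1; i < reprojectionErrors.length; i++) {
            if (reprojectionErrors[i] < reprojectionErrors[best]) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // e.g. the two solutions SOLVEPNP_IPPE returns for a planar target
        double[] errs = { 2.31, 0.47 };
        System.out.println("best solution index: " + bestPose(errs));
    }
}
```

The returned index would then select the matching entries of rvecs and tvecs.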
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="solvePnPGeneric(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List)">
|
|
<h3>solvePnPGeneric</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">solvePnPGeneric</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs)</span></div>
|
|
<div class="block">Finds an object pose \( {}^{c}\mathbf{T}_o \) from 3D-2D point correspondences.
|
|
|
|
|
|
|
|
SEE: REF: calib3d_solvePnP
|
|
|
|
This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector>
|
|
pair), depending on the number of input points and the chosen method:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
</li>
|
|
<li>
|
|
For all other flags, the number of input points must be >= 4 and the object points can be in any configuration.
|
|
Only 1 solution is returned.
|
|
</li>
|
|
</ul></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>rvecs</code> - Vector of output rotation vectors (see REF: Rodrigues ) that, together with tvecs, bring points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvecs</code> - Vector of output translation vectors.
Overloads that additionally take a <code>reprojectionError</code> output argument also report the RMS error
(\( \text{RMSE} = \sqrt{\frac{\sum_{i}^{N} \left ( \hat{y_i} - y_i \right )^2}{N}} \)) between the input image points
and the 3D object points projected with the estimated pose.
|
|
|
|
More information is described in REF: calib3d_solvePnP
|
|
|
|
<b>Note:</b>
|
|
<ul>
|
|
<li>
|
|
An example of how to use solvePnP for planar augmented reality can be found at
|
|
opencv_source_code/samples/python/plane_ar.py
|
|
</li>
|
|
<li>
|
|
If you are using Python:
|
|
<ul>
|
|
<li>
|
|
Numpy array slices won't work as input because solvePnP requires contiguous
|
|
arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
|
|
modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
</li>
|
|
<li>
|
|
The P3P algorithm requires image points to be in an array of shape (N,1,2) due
|
|
to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
|
|
which requires 2-channel information.
|
|
</li>
|
|
<li>
|
|
Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of
|
|
it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints =
|
|
np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
|
|
</li>
|
|
</ul>
|
|
</li>
<li>
|
|
The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are
|
|
unstable and sometimes give completely wrong results. If you pass one of these two
|
|
flags, REF: SOLVEPNP_EPNP method will be used instead.
|
|
</li>
|
|
<li>
|
|
The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P
|
|
methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions
|
|
of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_ITERATIVE method and <code>useExtrinsicGuess=true</code>, the minimum number of points is 3 (3 points
|
|
are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the
|
|
global solution to converge.
|
|
</li>
|
|
<li>
|
|
With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE is a special case suitable for marker pose estimation.
The number of input points must be 4. Object points must be defined in the following order:
|
|
<ul>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
</ul>
|
|
</li>
<li>
|
|
With REF: SOLVEPNP_SQPNP, the number of input points must be >= 3.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
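The REF: SOLVEPNP_IPPE_SQUARE point ordering listed above can be generated programmatically instead of typed by hand. Below is a minimal plain-Java sketch with no OpenCV dependency; the helper name <code>squareObjectPoints</code> is illustrative and not part of the OpenCV API:

```java
/** Builds the four object points required by SOLVEPNP_IPPE_SQUARE,
 *  in the mandated order, for a square marker of the given side length.
 *  Illustrative helper; not part of the OpenCV API. */
public class SquareMarker {
    static double[][] squareObjectPoints(double squareLength) {
        double h = squareLength / 2.0;
        return new double[][] {
            {-h,  h, 0},  // point 0: [-squareLength/2,  squareLength/2, 0]
            { h,  h, 0},  // point 1: [ squareLength/2,  squareLength/2, 0]
            { h, -h, 0},  // point 2: [ squareLength/2, -squareLength/2, 0]
            {-h, -h, 0}   // point 3: [-squareLength/2, -squareLength/2, 0]
        };
    }

    public static void main(String[] args) {
        double[][] pts = squareObjectPoints(0.05); // a 5 cm marker
        System.out.println(java.util.Arrays.deepToString(pts));
    }
}
```

The rows of the returned array can then be copied into a <code>MatOfPoint3f</code> before calling solvePnP with REF: SOLVEPNP_IPPE_SQUARE.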
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="initCameraMatrix2D(java.util.List,java.util.List,org.opencv.core.Size,double)">
|
|
<h3>initCameraMatrix2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">initCameraMatrix2D</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
double aspectRatio)</span></div>
|
|
<div class="block">Finds an initial camera intrinsic matrix from 3D-2D point correspondences.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of the calibration pattern points in the calibration pattern
|
|
coordinate space. In the old interface all the per-view vectors are concatenated. See
|
|
#calibrateCamera for details.</dd>
|
|
<dd><code>imagePoints</code> - Vector of vectors of the projections of the calibration pattern points. In the
|
|
old interface all the per-view vectors are concatenated.</dd>
|
|
<dd><code>imageSize</code> - Image size in pixels used to initialize the principal point.</dd>
|
|
<dd><code>aspectRatio</code> - If it is zero or negative, both \(f_x\) and \(f_y\) are estimated independently.
|
|
Otherwise, \(f_x = f_y \cdot \texttt{aspectRatio}\) .
|
|
|
|
The function estimates and returns an initial camera intrinsic matrix for the camera calibration process.
|
|
Currently, the function only supports planar calibration patterns, which are patterns where each
|
|
object point has z-coordinate =0.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
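The <code>aspectRatio</code> semantics described above can be summarized in code. The sketch below is a plain-Java illustration of the documented rule only (the helper name <code>focalX</code> is made up for this example and is not an OpenCV function):

```java
/** Mirrors the documented aspectRatio rule of initCameraMatrix2D:
 *  when aspectRatio is zero or negative, fx and fy are estimated
 *  independently; otherwise fx = fy * aspectRatio.
 *  Illustrative helper; not part of the OpenCV API. */
public class AspectRatio {
    static double focalX(double fy, double aspectRatio) {
        if (aspectRatio <= 0) {
            // In this case initCameraMatrix2D estimates fx on its own.
            throw new IllegalArgumentException("fx is estimated independently");
        }
        return fy * aspectRatio;
    }

    public static void main(String[] args) {
        System.out.println(focalX(800.0, 1.0));   // square pixels: fx == fy
        System.out.println(focalX(800.0, 1.25));  // fx tied to fy by the ratio
    }
}
```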
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="initCameraMatrix2D(java.util.List,java.util.List,org.opencv.core.Size)">
|
|
<h3>initCameraMatrix2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">initCameraMatrix2D</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/MatOfPoint3f.html" title="class in org.opencv.core">MatOfPoint3f</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize)</span></div>
|
|
<div class="block">Finds an initial camera intrinsic matrix from 3D-2D point correspondences.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of the calibration pattern points in the calibration pattern
|
|
coordinate space. In the old interface all the per-view vectors are concatenated. See
|
|
#calibrateCamera for details.</dd>
|
|
<dd><code>imagePoints</code> - Vector of vectors of the projections of the calibration pattern points. In the
|
|
old interface all the per-view vectors are concatenated.</dd>
|
|
<dd><code>imageSize</code> - Image size in pixels used to initialize the principal point.
|
|
|
|
The function estimates and returns an initial camera intrinsic matrix for the camera calibration process.
|
|
Currently, the function only supports planar calibration patterns, which are patterns where each
|
|
object point has z-coordinate =0.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findChessboardCorners(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.MatOfPoint2f,int)">
|
|
<h3>findChessboardCorners</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">findChessboardCorners</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> corners,
|
|
int flags)</span></div>
|
|
<div class="block">Finds the positions of internal corners of the chessboard.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>image</code> - Source chessboard view. It must be an 8-bit grayscale or color image.</dd>
|
|
<dd><code>patternSize</code> - Number of inner corners per a chessboard row and column
|
|
( patternSize = cv::Size(points_per_row,points_per_column) = cv::Size(columns,rows) ).</dd>
|
|
<dd><code>corners</code> - Output array of detected corners.</dd>
|
|
<dd><code>flags</code> - Various operation flags that can be zero or a combination of the following values:
|
|
<ul>
|
|
<li>
|
|
REF: CALIB_CB_ADAPTIVE_THRESH Use adaptive thresholding to convert the image to black
|
|
and white, rather than a fixed threshold level (computed from the average image brightness).
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_NORMALIZE_IMAGE Normalize the image gamma with #equalizeHist before
|
|
applying fixed or adaptive thresholding.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_FILTER_QUADS Use additional criteria (like contour area, perimeter,
|
|
square-like shape) to filter out false quads extracted at the contour retrieval stage.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_FAST_CHECK Run a fast check on the image that looks for chessboard corners,
|
|
and shortcut the call if none is found. This can drastically speed up the call in the
|
|
degenerate condition when no chessboard is observed.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_PLAIN All other flags are ignored. The input image is taken as is.
|
|
No image processing is done to improve checkerboard detection. This speeds up the
execution of the function but can lead to the checkerboard not being recognized if the image
has not previously been binarized in the appropriate manner.
|
|
</li>
|
|
</ul>
|
|
|
|
The function attempts to determine whether the input image is a view of the chessboard pattern and
|
|
locate the internal chessboard corners. The function returns a non-zero value if all of the corners
|
|
are found and they are placed in a certain order (row by row, left to right in every row).
|
|
Otherwise, if the function fails to find all the corners or reorder them, it returns 0. For example,
|
|
a regular chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the black
|
|
squares touch each other. The detected coordinates are approximate, and to determine their positions
|
|
more accurately, the function calls #cornerSubPix. You also may use the function #cornerSubPix with
|
|
different parameters if returned coordinates are not accurate enough.
|
|
|
|
Sample usage of detecting and drawing chessboard corners:
|
|
<code>
|
|
Size patternsize(8,6); //interior number of corners
|
|
Mat gray = ....; //source image
|
|
vector<Point2f> corners; //this will be filled by the detected corners
|
|
|
|
//CALIB_CB_FAST_CHECK saves a lot of time on images
|
|
//that do not contain any chessboard corners
|
|
bool patternfound = findChessboardCorners(gray, patternsize, corners,
|
|
CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE
|
|
+ CALIB_CB_FAST_CHECK);
|
|
|
|
if(patternfound)
|
|
cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
|
|
TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
|
|
|
|
drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
|
|
</code>
|
|
<b>Note:</b> The function requires white space (like a square-thick border, the wider the better) around
|
|
the board to make the detection more robust in various environments. Otherwise, if there is no
|
|
border and the background is dark, the outer black squares cannot be segmented properly and so the
|
|
square grouping and ordering algorithm fails.
|
|
|
|
Use the <code>gen_pattern.py</code> Python script (REF: tutorial_camera_calibration_pattern)
|
|
to create the desired checkerboard pattern.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
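The <code>patternSize</code> convention above (inner corners, not squares) is a frequent source of off-by-one errors. The following plain-Java sketch of the mapping has no OpenCV dependency; the helper name <code>innerCorners</code> is illustrative, not part of the API:

```java
/** Maps the number of squares per chessboard row/column to the patternSize
 *  expected by findChessboardCorners: inner corners = squares - 1 per dimension.
 *  For example, a regular 8 x 8 chessboard has 7 x 7 internal corners.
 *  Illustrative helper; not part of the OpenCV API. */
public class PatternSize {
    static int[] innerCorners(int squaresPerRow, int squaresPerColumn) {
        if (squaresPerRow < 2 || squaresPerColumn < 2)
            throw new IllegalArgumentException("need at least 2 squares per dimension");
        return new int[] { squaresPerRow - 1, squaresPerColumn - 1 };
    }

    public static void main(String[] args) {
        int[] p = innerCorners(8, 8);
        System.out.println(p[0] + " x " + p[1]); // 7 x 7
    }
}
```

The two returned counts are the values to pass as <code>new Size(columns, rows)</code> when calling findChessboardCorners.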
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findChessboardCorners(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.MatOfPoint2f)">
|
|
<h3>findChessboardCorners</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">findChessboardCorners</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> corners)</span></div>
|
|
<div class="block">Finds the positions of internal corners of the chessboard.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>image</code> - Source chessboard view. It must be an 8-bit grayscale or color image.</dd>
|
|
<dd><code>patternSize</code> - Number of inner corners per a chessboard row and column
|
|
( patternSize = cv::Size(points_per_row,points_per_column) = cv::Size(columns,rows) ).</dd>
|
|
<dd><code>corners</code> - Output array of detected corners.
|
|
<ul>
|
|
<li>
|
|
REF: CALIB_CB_ADAPTIVE_THRESH Use adaptive thresholding to convert the image to black
|
|
and white, rather than a fixed threshold level (computed from the average image brightness).
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_NORMALIZE_IMAGE Normalize the image gamma with #equalizeHist before
|
|
applying fixed or adaptive thresholding.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_FILTER_QUADS Use additional criteria (like contour area, perimeter,
|
|
square-like shape) to filter out false quads extracted at the contour retrieval stage.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_FAST_CHECK Run a fast check on the image that looks for chessboard corners,
|
|
and shortcut the call if none is found. This can drastically speed up the call in the
|
|
degenerate condition when no chessboard is observed.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_PLAIN All other flags are ignored. The input image is taken as is.
|
|
No image processing is done to improve checkerboard detection. This speeds up the
execution of the function but can lead to the checkerboard not being recognized if the image
has not previously been binarized in the appropriate manner.
|
|
</li>
|
|
</ul>
|
|
|
|
The function attempts to determine whether the input image is a view of the chessboard pattern and
|
|
locate the internal chessboard corners. The function returns a non-zero value if all of the corners
|
|
are found and they are placed in a certain order (row by row, left to right in every row).
|
|
Otherwise, if the function fails to find all the corners or reorder them, it returns 0. For example,
|
|
a regular chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the black
|
|
squares touch each other. The detected coordinates are approximate, and to determine their positions
|
|
more accurately, the function calls #cornerSubPix. You also may use the function #cornerSubPix with
|
|
different parameters if returned coordinates are not accurate enough.
|
|
|
|
Sample usage of detecting and drawing chessboard corners:
|
|
<code>
|
|
Size patternsize(8,6); //interior number of corners
|
|
Mat gray = ....; //source image
|
|
vector<Point2f> corners; //this will be filled by the detected corners
|
|
|
|
//CALIB_CB_FAST_CHECK saves a lot of time on images
|
|
//that do not contain any chessboard corners
|
|
bool patternfound = findChessboardCorners(gray, patternsize, corners,
|
|
CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE
|
|
+ CALIB_CB_FAST_CHECK);
|
|
|
|
if(patternfound)
|
|
cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
|
|
TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
|
|
|
|
drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
|
|
</code>
|
|
<b>Note:</b> The function requires white space (like a square-thick border, the wider the better) around
|
|
the board to make the detection more robust in various environments. Otherwise, if there is no
|
|
border and the background is dark, the outer black squares cannot be segmented properly and so the
|
|
square grouping and ordering algorithm fails.
|
|
|
|
Use the <code>gen_pattern.py</code> Python script (REF: tutorial_camera_calibration_pattern)
|
|
to create the desired checkerboard pattern.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="checkChessboard(org.opencv.core.Mat,org.opencv.core.Size)">
|
|
<h3>checkChessboard</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">checkChessboard</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> img,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> size)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findChessboardCornersSBWithMeta(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,int,org.opencv.core.Mat)">
|
|
<h3>findChessboardCornersSBWithMeta</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">findChessboardCornersSBWithMeta</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
|
|
int flags,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> meta)</span></div>
|
|
<div class="block">Finds the positions of internal corners of the chessboard using a sector based approach.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>image</code> - Source chessboard view. It must be an 8-bit grayscale or color image.</dd>
|
|
<dd><code>patternSize</code> - Number of inner corners per a chessboard row and column
|
|
( patternSize = cv::Size(points_per_row,points_per_column) = cv::Size(columns,rows) ).</dd>
|
|
<dd><code>corners</code> - Output array of detected corners.</dd>
|
|
<dd><code>flags</code> - Various operation flags that can be zero or a combination of the following values:
|
|
<ul>
|
|
<li>
|
|
REF: CALIB_CB_NORMALIZE_IMAGE Normalize the image gamma with equalizeHist before detection.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_EXHAUSTIVE Run an exhaustive search to improve detection rate.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_ACCURACY Up-sample the input image to improve sub-pixel accuracy due to aliasing effects.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_LARGER The detected pattern is allowed to be larger than patternSize (see description).
|
|
</li>
|
|
<li>
|
|
REF: CALIB_CB_MARKER The detected pattern must have a marker (see description).
|
|
This should be used if an accurate camera calibration is required.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>meta</code> - Optional output array of detected corners (CV_8UC1 and size = cv::Size(columns,rows)).
|
|
Each entry stands for one corner of the pattern and can have one of the following values:
|
|
<ul>
|
|
<li>
|
|
0 = no meta data attached
|
|
</li>
|
|
<li>
|
|
1 = left-top corner of a black cell
|
|
</li>
|
|
<li>
|
|
2 = left-top corner of a white cell
|
|
</li>
|
|
<li>
|
|
3 = left-top corner of a black cell with a white marker dot
|
|
</li>
|
|
<li>
|
|
4 = left-top corner of a white cell with a black marker dot (pattern origin in case of markers otherwise first corner)
|
|
</li>
|
|
</ul>
|
|
|
|
The function is analogous to #findChessboardCorners but uses a localized radon
transformation approximated by box filters, making it more robust to all sorts of
noise and faster on larger images, and able to directly return the sub-pixel
position of the internal chessboard corners. The method is based on the paper
CITE: duda2018 "Accurate Detection and Localization of Checkerboard Corners for
Calibration", which demonstrates that the returned sub-pixel positions are more
accurate than those returned by cornerSubPix, allowing a precise camera
calibration for demanding applications.
|
|
|
|
In case the flags REF: CALIB_CB_LARGER or REF: CALIB_CB_MARKER are given,
the result can be recovered from the optional meta array. Both flags are
helpful when the calibration pattern exceeds the field of view of the camera.
These oversized patterns allow more accurate calibrations because corners as
close as possible to the image borders can be utilized. For a
consistent coordinate system across all images, the optional marker (see image
below) can be used to move the origin of the board to the location where the
black circle is located.
|
|
|
|
<b>Note:</b> The function requires a white border with roughly the same width as one
|
|
of the checkerboard fields around the whole board to improve the detection in
|
|
various environments. In addition, because of the localized radon
|
|
transformation it is beneficial to use round corners for the field corners
|
|
which are located on the outside of the board. The following figure illustrates
|
|
a sample checkerboard optimized for the detection. However, any other checkerboard
|
|
can be used as well.
|
|
|
|
Use the <code>gen_pattern.py</code> Python script (REF: tutorial_camera_calibration_pattern)
|
|
to create the corresponding checkerboard pattern:
|
|
(See the sample pattern image <code>checkerboard_radon.png</code> in the OpenCV documentation.)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
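The meta codes listed above (0 through 4) can be decoded with a simple lookup. This is a plain-Java sketch of the value-to-meaning mapping stated in the parameter description; the helper itself is illustrative and not part of the OpenCV API:

```java
/** Decodes the per-corner meta values written by findChessboardCornersSBWithMeta
 *  into human-readable descriptions, following the documented mapping.
 *  Illustrative helper; not part of the OpenCV API. */
public class CornerMeta {
    static String describe(int metaValue) {
        switch (metaValue) {
            case 0: return "no meta data attached";
            case 1: return "left-top corner of a black cell";
            case 2: return "left-top corner of a white cell";
            case 3: return "left-top corner of a black cell with a white marker dot";
            case 4: return "left-top corner of a white cell with a black marker dot";
            default: throw new IllegalArgumentException("unknown meta value: " + metaValue);
        }
    }

    public static void main(String[] args) {
        // Value 4 also marks the pattern origin when markers are used.
        System.out.println(describe(4));
    }
}
```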
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findChessboardCornersSB(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,int)">
|
|
<h3>findChessboardCornersSB</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">findChessboardCornersSB</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
|
|
int flags)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findChessboardCornersSB(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat)">
|
|
<h3>findChessboardCornersSB</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">findChessboardCornersSB</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateChessboardSharpness(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,float,boolean,org.opencv.core.Mat)">
|
|
<h3>estimateChessboardSharpness</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Scalar.html" title="class in org.opencv.core">Scalar</a></span> <span class="element-name">estimateChessboardSharpness</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
|
|
float rise_distance,
|
|
boolean vertical,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> sharpness)</span></div>
|
|
<div class="block">Estimates the sharpness of a detected chessboard.
|
|
|
|
Image sharpness, as well as brightness, are a critical parameter for accuracte
|
|
camera calibration. For accessing these parameters for filtering out
|
|
problematic calibraiton images, this method calculates edge profiles by traveling from
|
|
black to white chessboard cell centers. Based on this, the number of pixels is
|
|
calculated required to transit from black to white. This width of the
|
|
transition area is a good indication of how sharp the chessboard is imaged
|
|
and should be below ~3.0 pixels.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>image</code> - Gray image used to find chessboard corners</dd>
|
|
<dd><code>patternSize</code> - Size of a found chessboard pattern</dd>
|
|
<dd><code>corners</code> - Corners found by #findChessboardCornersSB</dd>
|
|
<dd><code>rise_distance</code> - Rise distance 0.8 means 10% ... 90% of the final signal strength</dd>
|
|
<dd><code>vertical</code> - By default edge responses for horizontal lines are calculated</dd>
|
|
<dd><code>sharpness</code> - Optional output array with a sharpness value for calculated edge responses (see description)
|
|
|
|
The optional sharpness array is of type CV_32FC1 and has for each calculated
|
|
profile one row with the following five entries:
|
|
0 = x coordinate of the underlying edge in the image
|
|
1 = y coordinate of the underlying edge in the image
|
|
2 = width of the transition area (sharpness)
|
|
3 = signal strength in the black cell (min brightness)
|
|
4 = signal strength in the white cell (max brightness)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Scalar(average sharpness, average min brightness, average max brightness,0)</dd>
|
|
</dl>
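The ~3.0 pixel guideline from the description above can be turned into a simple accept/reject check on the first component of the returned Scalar (the average transition width). This is a plain-Java sketch; the helper name and the threshold constant are illustrative, with the threshold taken from the rule of thumb stated in the description:

```java
/** Applies the documented ~3.0 px sharpness guideline to the average
 *  transition width returned in val[0] of the Scalar produced by
 *  estimateChessboardSharpness. Illustrative helper; not part of the API. */
public class SharpnessFilter {
    // Rule-of-thumb threshold from the OpenCV description above.
    static final double MAX_TRANSITION_WIDTH_PX = 3.0;

    static boolean isSharpEnough(double avgSharpness) {
        return avgSharpness < MAX_TRANSITION_WIDTH_PX;
    }

    public static void main(String[] args) {
        System.out.println(isSharpEnough(2.1)); // well-focused image
        System.out.println(isSharpEnough(4.7)); // blurry image: discard it
    }
}
```

In a calibration pipeline, images failing this check would typically be dropped before calling calibrateCamera.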
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateChessboardSharpness(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,float,boolean)">
|
|
<h3>estimateChessboardSharpness</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Scalar.html" title="class in org.opencv.core">Scalar</a></span> <span class="element-name">estimateChessboardSharpness</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
|
|
float rise_distance,
|
|
boolean vertical)</span></div>
|
|
<div class="block">Estimates the sharpness of a detected chessboard.
|
|
|
|
Image sharpness, as well as brightness, are a critical parameter for accuracte
|
|
camera calibration. For accessing these parameters for filtering out
|
|
problematic calibraiton images, this method calculates edge profiles by traveling from
|
|
black to white chessboard cell centers. Based on this, the number of pixels is
|
|
calculated required to transit from black to white. This width of the
|
|
transition area is a good indication of how sharp the chessboard is imaged
|
|
and should be below ~3.0 pixels.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>image</code> - Gray image used to find chessboard corners</dd>
|
|
<dd><code>patternSize</code> - Size of a found chessboard pattern</dd>
|
|
<dd><code>corners</code> - Corners found by #findChessboardCornersSB</dd>
|
|
<dd><code>rise_distance</code> - Rise distance 0.8 means 10% ... 90% of the final signal strength</dd>
|
|
<dd><code>vertical</code> - By default edge responses for horizontal lines are calculated
|
|
|
|
The optional sharpness array is of type CV_32FC1 and has for each calculated
|
|
profile one row with the following five entries:
|
|
0 = x coordinate of the underlying edge in the image
|
|
1 = y coordinate of the underlying edge in the image
|
|
2 = width of the transition area (sharpness)
|
|
3 = signal strength in the black cell (min brightness)
|
|
4 = signal strength in the white cell (max brightness)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Scalar(average sharpness, average min brightness, average max brightness,0)</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateChessboardSharpness(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,float)">
|
|
<h3>estimateChessboardSharpness</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Scalar.html" title="class in org.opencv.core">Scalar</a></span> <span class="element-name">estimateChessboardSharpness</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
|
|
float rise_distance)</span></div>
|
|
<div class="block">Estimates the sharpness of a detected chessboard.
|
|
|
|
Image sharpness, as well as brightness, are a critical parameter for accuracte
|
|
camera calibration. For accessing these parameters for filtering out
|
|
problematic calibraiton images, this method calculates edge profiles by traveling from
|
|
black to white chessboard cell centers. Based on this, the number of pixels is
|
|
calculated required to transit from black to white. This width of the
|
|
transition area is a good indication of how sharp the chessboard is imaged
|
|
and should be below ~3.0 pixels.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>image</code> - Gray image used to find chessboard corners</dd>
|
|
<dd><code>patternSize</code> - Size of a found chessboard pattern</dd>
|
|
<dd><code>corners</code> - Corners found by #findChessboardCornersSB</dd>
|
|
<dd><code>rise_distance</code> - Rise distance 0.8 means 10% ... 90% of the final signal strength
|
|
|
|
The optional sharpness array is of type CV_32FC1 and has for each calculated
|
|
profile one row with the following five entries:
|
|
0 = x coordinate of the underlying edge in the image
|
|
1 = y coordinate of the underlying edge in the image
|
|
2 = width of the transition area (sharpness)
|
|
3 = signal strength in the black cell (min brightness)
|
|
4 = signal strength in the white cell (max brightness)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Scalar(average sharpness, average min brightness, average max brightness,0)</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateChessboardSharpness(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat)">
|
|
<h3>estimateChessboardSharpness</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Scalar.html" title="class in org.opencv.core">Scalar</a></span> <span class="element-name">estimateChessboardSharpness</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners)</span></div>
|
|
<div class="block">Estimates the sharpness of a detected chessboard.
|
|
|
|
Image sharpness, as well as brightness, are a critical parameter for accuracte
|
|
camera calibration. For accessing these parameters for filtering out
|
|
problematic calibraiton images, this method calculates edge profiles by traveling from
|
|
black to white chessboard cell centers. Based on this, the number of pixels is
|
|
calculated required to transit from black to white. This width of the
|
|
transition area is a good indication of how sharp the chessboard is imaged
|
|
and should be below ~3.0 pixels.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>image</code> - Gray image used to find chessboard corners</dd>
|
|
<dd><code>patternSize</code> - Size of a found chessboard pattern</dd>
|
|
<dd><code>corners</code> - Corners found by #findChessboardCornersSB
|
|
|
|
The optional sharpness array is of type CV_32FC1 and has for each calculated
|
|
profile one row with the following five entries:
|
|
0 = x coordinate of the underlying edge in the image
|
|
1 = y coordinate of the underlying edge in the image
|
|
2 = width of the transition area (sharpness)
|
|
3 = signal strength in the black cell (min brightness)
|
|
4 = signal strength in the white cell (max brightness)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Scalar(average sharpness, average min brightness, average max brightness,0)</dd>
|
|
</dl>
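As a minimal sketch of the workflow described above (the image path, pattern size, and blur threshold are hypothetical placeholders; this assumes the OpenCV native library is on the library path), sharpness can be checked right after corner detection to reject blurry calibration frames:

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;

public class SharpnessCheck {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Hypothetical input image and pattern size; adjust to your setup.
        Mat gray = Imgcodecs.imread("board.png", Imgcodecs.IMREAD_GRAYSCALE);
        Size patternSize = new Size(9, 6);
        Mat corners = new Mat();
        if (Calib3d.findChessboardCornersSB(gray, patternSize, corners)) {
            Scalar s = Calib3d.estimateChessboardSharpness(gray, patternSize, corners);
            // s.val[0] = average transition width in pixels; below ~3.0 indicates a sharp image.
            if (s.val[0] > 3.0) {
                System.out.println("Too blurry for calibration: " + s.val[0]);
            }
        }
    }
}
```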
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="find4QuadCornerSubpix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size)">
|
|
<h3>find4QuadCornerSubpix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">find4QuadCornerSubpix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> img,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> corners,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> region_size)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="drawChessboardCorners(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.MatOfPoint2f,boolean)">
|
|
<h3>drawChessboardCorners</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">drawChessboardCorners</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> corners,
|
|
boolean patternWasFound)</span></div>
|
|
<div class="block">Renders the detected chessboard corners.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>image</code> - Destination image. It must be an 8-bit color image.</dd>
|
|
<dd><code>patternSize</code> - Number of inner corners per a chessboard row and column
|
|
(patternSize = cv::Size(points_per_row,points_per_column)).</dd>
|
|
<dd><code>corners</code> - Array of detected corners, the output of #findChessboardCorners.</dd>
|
|
<dd><code>patternWasFound</code> - Parameter indicating whether the complete board was found or not. The
|
|
return value of #findChessboardCorners should be passed here.
|
|
|
|
The function draws individual chessboard corners detected either as red circles if the board was not
|
|
found, or as colored corners connected with lines if the board was found.</dd>
|
|
</dl>
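A minimal sketch of the detect-then-draw flow described above (the file names are hypothetical; assumes the OpenCV native library is loadable):

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class DrawCornersDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat color = Imgcodecs.imread("board.png");   // 8-bit color destination image
        Mat gray = new Mat();
        Imgproc.cvtColor(color, gray, Imgproc.COLOR_BGR2GRAY);
        Size patternSize = new Size(9, 6);           // inner corners per row and column
        MatOfPoint2f corners = new MatOfPoint2f();
        boolean found = Calib3d.findChessboardCorners(gray, patternSize, corners);
        // Pass the detector's return value so incomplete detections render as red circles.
        Calib3d.drawChessboardCorners(color, patternSize, corners, found);
        Imgcodecs.imwrite("board_corners.png", color);
    }
}
```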
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="drawFrameAxes(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,float,int)">
|
|
<h3>drawFrameAxes</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">drawFrameAxes</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
float length,
|
|
int thickness)</span></div>
|
|
<div class="block">Draw axes of the world/object coordinate system from pose estimation. SEE: solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>image</code> - Input/output image. It must have 1 or 3 channels. The number of channels is not altered.</dd>
|
|
<dd><code>cameraMatrix</code> - Input 3x3 floating-point matrix of camera intrinsic parameters.
|
|
\(\cameramatrix{A}\)</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>rvec</code> - Rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Translation vector.</dd>
|
|
<dd><code>length</code> - Length of the painted axes in the same unit as tvec (usually in meters).</dd>
|
|
<dd><code>thickness</code> - Line thickness of the painted axes.
|
|
|
|
This function draws the axes of the world/object coordinate system w.r.t. the camera frame.
|
|
OX is drawn in red, OY in green and OZ in blue.</dd>
|
|
</dl>
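The pose-then-draw flow above can be sketched as follows (the correspondences, intrinsics, and image size are hypothetical placeholders; assumes the OpenCV native library is loadable):

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point;
import org.opencv.core.Point3;
import org.opencv.core.Scalar;

public class PoseAxesDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Hypothetical 3D-2D correspondences and intrinsics; substitute your own.
        MatOfPoint3f objectPts = new MatOfPoint3f(
                new Point3(0, 0, 0), new Point3(1, 0, 0),
                new Point3(1, 1, 0), new Point3(0, 1, 0));
        MatOfPoint2f imagePts = new MatOfPoint2f(
                new Point(320, 240), new Point(400, 240),
                new Point(400, 320), new Point(320, 320));
        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800);
        cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 320);
        cameraMatrix.put(1, 2, 240);
        MatOfDouble distCoeffs = new MatOfDouble();  // empty => zero distortion assumed
        Mat rvec = new Mat(), tvec = new Mat();
        Calib3d.solvePnP(objectPts, imagePts, cameraMatrix, distCoeffs, rvec, tvec);
        Mat image = new Mat(480, 640, CvType.CV_8UC3, new Scalar(0, 0, 0));
        // Axis length is in the same unit as tvec; OX red, OY green, OZ blue.
        Calib3d.drawFrameAxes(image, cameraMatrix, distCoeffs, rvec, tvec, 0.5f, 2);
    }
}
```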
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="drawFrameAxes(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,float)">
|
|
<h3>drawFrameAxes</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">drawFrameAxes</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
float length)</span></div>
|
|
<div class="block">Draw axes of the world/object coordinate system from pose estimation. SEE: solvePnP</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>image</code> - Input/output image. It must have 1 or 3 channels. The number of channels is not altered.</dd>
|
|
<dd><code>cameraMatrix</code> - Input 3x3 floating-point matrix of camera intrinsic parameters.
|
|
\(\cameramatrix{A}\)</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>rvec</code> - Rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Translation vector.</dd>
|
|
<dd><code>length</code> - Length of the painted axes in the same unit as tvec (usually in meters).
|
|
|
|
This function draws the axes of the world/object coordinate system w.r.t. the camera frame.
|
|
OX is drawn in red, OY in green and OZ in blue.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findCirclesGrid(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,int)">
|
|
<h3>findCirclesGrid</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">findCirclesGrid</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> centers,
|
|
int flags)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findCirclesGrid(org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat)">
|
|
<h3>findCirclesGrid</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">findCirclesGrid</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> image,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> patternSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> centers)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCameraExtended(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)">
|
|
<h3>calibrateCameraExtended</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCameraExtended</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration
|
|
pattern.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - In the new interface it is a vector of vectors of calibration pattern points in
|
|
the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer
|
|
vector contains as many elements as the number of pattern views. If the same calibration pattern
|
|
is shown in each view and it is fully visible, all the vectors will be the same. However, it is
|
|
possible to use partially occluded patterns or even different patterns in different views. Then,
|
|
the vectors will be different. Although the points are 3D, they all lie in the calibration pattern's
|
|
XY coordinate plane (thus 0 in the Z-coordinate), if the used calibration pattern is a planar rig.
|
|
In the old interface all the vectors of object points from different views are concatenated
|
|
together.</dd>
|
|
<dd><code>imagePoints</code> - In the new interface it is a vector of vectors of the projections of calibration
|
|
pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() and
|
|
objectPoints.size(), and imagePoints[i].size() and objectPoints[i].size() for each i, must be equal,
|
|
respectively. In the old interface all the vectors of object points from different views are
|
|
concatenated together.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize the camera intrinsic matrix.</dd>
|
|
<dd><code>cameraMatrix</code> - Input/output 3x3 floating-point camera intrinsic matrix
|
|
\(\cameramatrix{A}\) . If REF: CALIB_USE_INTRINSIC_GUESS
|
|
and/or REF: CALIB_FIX_ASPECT_RATIO, REF: CALIB_FIX_PRINCIPAL_POINT or REF: CALIB_FIX_FOCAL_LENGTH
|
|
are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.</dd>
|
|
<dd><code>distCoeffs</code> - Input/output vector of distortion coefficients
|
|
\(\distcoeffs\).</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors (REF: Rodrigues ) estimated for each pattern view
|
|
(e.g. std::vector<cv::Mat>>). That is, each i-th rotation vector together with the corresponding
|
|
i-th translation vector (see the next output parameter description) brings the calibration pattern
|
|
from the object coordinate space (in which object points are specified) to the camera coordinate
|
|
space. In more technical terms, the tuple of the i-th rotation and translation vector performs
|
|
a change of basis from object coordinate space to camera coordinate space. Due to its duality, this
|
|
tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate
|
|
space.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view, see parameter
|
|
description above.</dd>
|
|
<dd><code>stdDeviationsIntrinsics</code> - Output vector of standard deviations estimated for intrinsic
|
|
parameters. Order of deviations values:
|
|
\((f_x, f_y, c_x, c_y, k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6 , s_1, s_2, s_3,
|
|
s_4, \tau_x, \tau_y)\) If one of the parameters is not estimated, its deviation is equal to zero.</dd>
|
|
<dd><code>stdDeviationsExtrinsics</code> - Output vector of standard deviations estimated for extrinsic
|
|
parameters. Order of deviations values: \((R_0, T_0, \dotsc , R_{M - 1}, T_{M - 1})\) where M is
|
|
the number of pattern views. \(R_i, T_i\) are concatenated 1x3 vectors.</dd>
|
|
<dd><code>perViewErrors</code> - Output vector of the RMS re-projection error estimated for each pattern view.</dd>
|
|
<dd><code>flags</code> - Different flags that may be zero or a combination of the following values:
|
|
<ul>
|
|
<li>
|
|
REF: CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of
|
|
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
|
|
center ( imageSize is used), and focal distances are computed in a least-squares fashion.
|
|
Note that if intrinsic parameters are known, there is no need to use this function just to
|
|
estimate extrinsic parameters. Use REF: solvePnP instead.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global
|
|
optimization. It stays at the center or at a different location specified when
|
|
REF: CALIB_USE_INTRINSIC_GUESS is set too.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_ASPECT_RATIO The functions consider only fy as a free parameter. The
|
|
ratio fx/fy stays the same as in the input cameraMatrix . When
|
|
REF: CALIB_USE_INTRINSIC_GUESS is not set, the actual input values of fx and fy are
|
|
ignored, only their ratio is computed and used further.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_ZERO_TANGENT_DIST Tangential distortion coefficients \((p_1, p_2)\) are set
|
|
to zeros and stay zero.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global optimization if
|
|
REF: CALIB_USE_INTRINSIC_GUESS is set.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 The corresponding radial distortion
|
|
coefficient is not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is
|
|
set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_RATIONAL_MODEL Coefficients k4, k5, and k6 are enabled. To provide
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the rational model and return 8 coefficients or more.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. To provide
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the thin prism model and return 12 coefficients or more.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. To provide
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the tilted sensor model and return 14 coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>criteria</code> - Termination criteria for the iterative optimization algorithm.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the overall RMS re-projection error.
|
|
|
|
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
|
|
views. The algorithm is based on CITE: Zhang2000 and CITE: BouguetMCT . The coordinates of 3D object
|
|
points and their corresponding 2D projections in each view must be specified. That may be achieved
|
|
by using an object with known geometry and easily detectable feature points. Such an object is
|
|
called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as
|
|
a calibration rig (see REF: findChessboardCorners). Currently, initialization of intrinsic
|
|
parameters (when REF: CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration
|
|
patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also
|
|
be used as long as initial cameraMatrix is provided.
|
|
|
|
The algorithm performs the following steps:
|
|
|
|
<ul>
|
|
<li>
|
|
Compute the initial intrinsic parameters (the option only available for planar calibration
|
|
patterns) or read them from the input parameters. The distortion coefficients are all set to
|
|
zeros initially unless some of CALIB_FIX_K? are specified.
|
|
</li>
|
|
</ul>
|
|
|
|
<ul>
|
|
<li>
|
|
Estimate the initial camera pose as if the intrinsic parameters have been already known. This is
|
|
done using REF: solvePnP .
|
|
</li>
|
|
</ul>
|
|
|
|
<ul>
|
|
<li>
|
|
Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error,
|
|
that is, the total sum of squared distances between the observed feature points imagePoints and
|
|
the projected (using the current estimates for camera parameters and the poses) object points
|
|
objectPoints. See REF: projectPoints for details.
|
|
</li>
|
|
</ul>
|
|
|
|
<b>Note:</b>
|
|
If you use a non-square (i.e. non-N-by-N) grid and REF: findChessboardCorners for calibration,
|
|
and REF: calibrateCamera returns bad values (zero distortion coefficients, \(c_x\) and
|
|
\(c_y\) very far from the image center, and/or large differences between \(f_x\) and
|
|
\(f_y\) (ratios of 10:1 or more)), then you are probably using patternSize=cvSize(rows,cols)
|
|
instead of using patternSize=cvSize(cols,rows) in REF: findChessboardCorners.
|
|
|
|
<b>Note:</b>
|
|
The function may throw exceptions if an unsupported combination of parameters is provided or
|
|
the system is underconstrained.
|
|
|
|
SEE:
|
|
calibrateCameraRO, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate,
|
|
undistort</dd>
|
|
</dl>
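The calibration pipeline above can be sketched as follows (the image size, flag choice, and termination criteria are illustrative assumptions; the point lists must be filled with real per-view correspondences before this runs meaningfully):

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.core.TermCriteria;

public class CalibrateDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // One Mat per pattern view (e.g. MatOfPoint3f of board points and the
        // MatOfPoint2f corners returned by findChessboardCorners for that view).
        List<Mat> objectPoints = new ArrayList<>();
        List<Mat> imagePoints = new ArrayList<>();
        // ... fill both lists with corresponding pattern/view points ...
        Size imageSize = new Size(640, 480);
        Mat cameraMatrix = new Mat(), distCoeffs = new Mat();
        List<Mat> rvecs = new ArrayList<>(), tvecs = new ArrayList<>();
        Mat stdInt = new Mat(), stdExt = new Mat(), perViewErrors = new Mat();
        double rms = Calib3d.calibrateCameraExtended(
                objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs,
                rvecs, tvecs, stdInt, stdExt, perViewErrors,
                Calib3d.CALIB_FIX_K3,  // example flag combination
                new TermCriteria(TermCriteria.COUNT + TermCriteria.EPS, 30, 1e-6));
        // Inspect rms and perViewErrors to reject bad views and re-calibrate.
        System.out.println("Overall RMS re-projection error: " + rms);
    }
}
```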
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCameraExtended(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>calibrateCameraExtended</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCameraExtended</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
|
|
int flags)</span></div>
|
|
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration
|
|
pattern.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - In the new interface it is a vector of vectors of calibration pattern points in
|
|
the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer
|
|
vector contains as many elements as the number of pattern views. If the same calibration pattern
|
|
is shown in each view and it is fully visible, all the vectors will be the same. However, it is
|
|
possible to use partially occluded patterns or even different patterns in different views. Then,
|
|
the vectors will be different. Although the points are 3D, they all lie in the calibration pattern's
|
|
XY coordinate plane (thus 0 in the Z-coordinate), if the used calibration pattern is a planar rig.
|
|
In the old interface all the vectors of object points from different views are concatenated
|
|
together.</dd>
|
|
<dd><code>imagePoints</code> - In the new interface it is a vector of vectors of the projections of calibration
|
|
pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() and
|
|
objectPoints.size(), and imagePoints[i].size() and objectPoints[i].size() for each i, must be equal,
|
|
respectively. In the old interface all the vectors of object points from different views are
|
|
concatenated together.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize the camera intrinsic matrix.</dd>
|
|
<dd><code>cameraMatrix</code> - Input/output 3x3 floating-point camera intrinsic matrix
|
|
\(\cameramatrix{A}\) . If REF: CALIB_USE_INTRINSIC_GUESS
|
|
and/or REF: CALIB_FIX_ASPECT_RATIO, REF: CALIB_FIX_PRINCIPAL_POINT or REF: CALIB_FIX_FOCAL_LENGTH
|
|
are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.</dd>
|
|
<dd><code>distCoeffs</code> - Input/output vector of distortion coefficients
|
|
\(\distcoeffs\).</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors (REF: Rodrigues ) estimated for each pattern view
|
|
(e.g. std::vector<cv::Mat>>). That is, each i-th rotation vector together with the corresponding
|
|
i-th translation vector (see the next output parameter description) brings the calibration pattern
|
|
from the object coordinate space (in which object points are specified) to the camera coordinate
|
|
space. In more technical terms, the tuple of the i-th rotation and translation vector performs
|
|
a change of basis from object coordinate space to camera coordinate space. Due to its duality, this
|
|
tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate
|
|
space.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view, see parameter
|
|
description above.</dd>
|
|
<dd><code>stdDeviationsIntrinsics</code> - Output vector of standard deviations estimated for intrinsic
|
|
parameters. Order of deviations values:
|
|
\((f_x, f_y, c_x, c_y, k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6 , s_1, s_2, s_3,
|
|
s_4, \tau_x, \tau_y)\) If one of the parameters is not estimated, its deviation is equal to zero.</dd>
|
|
<dd><code>stdDeviationsExtrinsics</code> - Output vector of standard deviations estimated for extrinsic
|
|
parameters. Order of deviations values: \((R_0, T_0, \dotsc , R_{M - 1}, T_{M - 1})\) where M is
|
|
the number of pattern views. \(R_i, T_i\) are concatenated 1x3 vectors.</dd>
|
|
<dd><code>perViewErrors</code> - Output vector of the RMS re-projection error estimated for each pattern view.</dd>
|
|
<dd><code>flags</code> - Different flags that may be zero or a combination of the following values:
|
|
<ul>
|
|
<li>
|
|
REF: CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of
|
|
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
|
|
center ( imageSize is used), and focal distances are computed in a least-squares fashion.
|
|
Note that if intrinsic parameters are known, there is no need to use this function just to
|
|
estimate extrinsic parameters. Use REF: solvePnP instead.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global
|
|
optimization. It stays at the center or at a different location specified when
|
|
REF: CALIB_USE_INTRINSIC_GUESS is set too.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_ASPECT_RATIO The functions consider only fy as a free parameter. The
|
|
ratio fx/fy stays the same as in the input cameraMatrix . When
|
|
REF: CALIB_USE_INTRINSIC_GUESS is not set, the actual input values of fx and fy are
|
|
ignored, only their ratio is computed and used further.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_ZERO_TANGENT_DIST Tangential distortion coefficients \((p_1, p_2)\) are set
|
|
to zeros and stay zero.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global optimization if
|
|
REF: CALIB_USE_INTRINSIC_GUESS is set.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 The corresponding radial distortion
|
|
coefficient is not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is
|
|
set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_RATIONAL_MODEL Coefficients k4, k5, and k6 are enabled. To provide
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the rational model and return 8 coefficients or more.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. To provide
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the thin prism model and return 12 coefficients or more.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. To provide
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the tilted sensor model and return 14 coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the overall RMS re-projection error.
|
|
|
|
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
|
|
views. The algorithm is based on CITE: Zhang2000 and CITE: BouguetMCT . The coordinates of 3D object
|
|
points and their corresponding 2D projections in each view must be specified. That may be achieved
|
|
by using an object with known geometry and easily detectable feature points. Such an object is
|
|
called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as
|
|
a calibration rig (see REF: findChessboardCorners). Currently, initialization of intrinsic
|
|
parameters (when REF: CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration
|
|
patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also
|
|
be used as long as initial cameraMatrix is provided.
|
|
|
|
The algorithm performs the following steps:
|
|
|
|
<ul>
|
|
<li>
|
|
Compute the initial intrinsic parameters (the option only available for planar calibration
|
|
patterns) or read them from the input parameters. The distortion coefficients are all set to
|
|
zeros initially unless some of CALIB_FIX_K? are specified.
|
|
</li>
|
|
</ul>
|
|
|
|
<ul>
|
|
<li>
|
|
Estimate the initial camera pose as if the intrinsic parameters have been already known. This is
|
|
done using REF: solvePnP .
|
|
</li>
|
|
</ul>
|
|
|
|
<ul>
|
|
<li>
|
|
Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error,
|
|
that is, the total sum of squared distances between the observed feature points imagePoints and
|
|
the projected (using the current estimates for camera parameters and the poses) object points
|
|
objectPoints. See REF: projectPoints for details.
|
|
</li>
|
|
</ul>
|
|
|
|
<b>Note:</b>
|
|
If you use a non-square (i.e. non-N-by-N) grid and REF: findChessboardCorners for calibration,
|
|
and REF: calibrateCamera returns bad values (zero distortion coefficients, \(c_x\) and
|
|
\(c_y\) very far from the image center, and/or large differences between \(f_x\) and
|
|
\(f_y\) (ratios of 10:1 or more)), then you are probably using patternSize=cvSize(rows,cols)
|
|
instead of using patternSize=cvSize(cols,rows) in REF: findChessboardCorners.
|
|
|
|
<b>Note:</b>
|
|
The function may throw exceptions if an unsupported combination of parameters is provided or
|
|
the system is underconstrained.
|
|
|
|
SEE:
|
|
calibrateCameraRO, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate,
|
|
undistort</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCameraExtended(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>calibrateCameraExtended</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCameraExtended</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors)</span></div>
|
|
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration
|
|
pattern.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - In the new interface it is a vector of vectors of calibration pattern points in
|
|
the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer
|
|
vector contains as many elements as the number of pattern views. If the same calibration pattern
|
|
is shown in each view and it is fully visible, all the vectors will be the same. However, it is
|
|
possible to use partially occluded patterns or even different patterns in different views. Then,
|
|
the vectors will be different. Although the points are 3D, they all lie in the calibration pattern's
|
|
XY coordinate plane (thus 0 in the Z-coordinate), if the used calibration pattern is a planar rig.
|
|
In the old interface all the vectors of object points from different views are concatenated
|
|
together.</dd>
|
|
<dd><code>imagePoints</code> - In the new interface it is a vector of vectors of the projections of calibration
|
|
pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() and
|
|
objectPoints.size(), and imagePoints[i].size() and objectPoints[i].size() for each i, must be equal,
|
|
respectively. In the old interface all the vectors of image points from different views are
|
|
concatenated together.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize the camera intrinsic matrix.</dd>
|
|
<dd><code>cameraMatrix</code> - Input/output 3x3 floating-point camera intrinsic matrix
|
|
\(\cameramatrix{A}\) . If REF: CALIB_USE_INTRINSIC_GUESS
|
|
and/or REF: CALIB_FIX_ASPECT_RATIO, REF: CALIB_FIX_PRINCIPAL_POINT or REF: CALIB_FIX_FOCAL_LENGTH
|
|
are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.</dd>
|
|
<dd><code>distCoeffs</code> - Input/output vector of distortion coefficients
|
|
\(\distcoeffs\).</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors (REF: Rodrigues ) estimated for each pattern view
|
|
(e.g. std::vector<cv::Mat>). That is, each i-th rotation vector together with the corresponding
|
|
i-th translation vector (see the next output parameter description) brings the calibration pattern
|
|
from the object coordinate space (in which object points are specified) to the camera coordinate
|
|
space. In more technical terms, the tuple of the i-th rotation and translation vector performs
|
|
a change of basis from object coordinate space to camera coordinate space. Due to its duality, this
|
|
tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate
|
|
space.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view, see parameter
|
|
description above.</dd>
|
|
<dd><code>stdDeviationsIntrinsics</code> - Output vector of standard deviations estimated for intrinsic
|
|
parameters. Order of deviations values:
|
|
\((f_x, f_y, c_x, c_y, k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6 , s_1, s_2, s_3,
|
|
s_4, \tau_x, \tau_y)\). If a parameter is not estimated, its deviation equals zero.</dd>
|
|
<dd><code>stdDeviationsExtrinsics</code> - Output vector of standard deviations estimated for extrinsic
|
|
parameters. Order of deviations values: \((R_0, T_0, \dotsc , R_{M - 1}, T_{M - 1})\) where M is
|
|
the number of pattern views. \(R_i, T_i\) are concatenated 1x3 vectors.</dd>
|
|
<dd><code>perViewErrors</code> - Output vector of the RMS re-projection error estimated for each pattern view.
|
|
<ul>
|
|
<li>
|
|
REF: CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of
|
|
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
|
|
center ( imageSize is used), and focal distances are computed in a least-squares fashion.
|
|
Note, that if intrinsic parameters are known, there is no need to use this function just to
|
|
estimate extrinsic parameters. Use REF: solvePnP instead.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global
|
|
optimization. It stays at the center or at a different location specified when
|
|
REF: CALIB_USE_INTRINSIC_GUESS is set too.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_ASPECT_RATIO The functions consider only fy as a free parameter. The
|
|
ratio fx/fy stays the same as in the input cameraMatrix . When
|
|
REF: CALIB_USE_INTRINSIC_GUESS is not set, the actual input values of fx and fy are
|
|
ignored, only their ratio is computed and used further.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_ZERO_TANGENT_DIST Tangential distortion coefficients \((p_1, p_2)\) are set
|
|
to zeros and stay zero.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global optimization if
|
|
REF: CALIB_USE_INTRINSIC_GUESS is set.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 The corresponding radial distortion
|
|
coefficient is not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is
|
|
set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_RATIONAL_MODEL Coefficients k4, k5, and k6 are enabled. To provide the
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the rational model and return 8 coefficients or more.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. To provide the
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the thin prism model and return 12 coefficients or more.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. To provide the
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the tilted sensor model and return 14 coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the overall RMS re-projection error.
|
|
|
|
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
|
|
views. The algorithm is based on CITE: Zhang2000 and CITE: BouguetMCT . The coordinates of 3D object
|
|
points and their corresponding 2D projections in each view must be specified. That may be achieved
|
|
by using an object with known geometry and easily detectable feature points. Such an object is
|
|
called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as
|
|
a calibration rig (see REF: findChessboardCorners). Currently, initialization of intrinsic
|
|
parameters (when REF: CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration
|
|
patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also
|
|
be used as long as initial cameraMatrix is provided.
|
|
|
|
The algorithm performs the following steps:
|
|
|
|
<ul>
|
|
<li>
|
|
Compute the initial intrinsic parameters (the option only available for planar calibration
|
|
patterns) or read them from the input parameters. The distortion coefficients are all set to
|
|
zeros initially unless some of CALIB_FIX_K? are specified.
|
|
</li>
|
|
</ul>
|
|
|
|
<ul>
|
|
<li>
|
|
Estimate the initial camera pose as if the intrinsic parameters have been already known. This is
|
|
done using REF: solvePnP .
|
|
</li>
|
|
</ul>
|
|
|
|
<ul>
|
|
<li>
|
|
Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error,
|
|
that is, the total sum of squared distances between the observed feature points imagePoints and
|
|
the projected (using the current estimates for camera parameters and the poses) object points
|
|
objectPoints. See REF: projectPoints for details.
|
|
</li>
|
|
</ul>
|
|
|
|
<b>Note:</b>
|
|
If you use a non-square (i.e. non-N-by-N) grid and REF: findChessboardCorners for calibration,
|
|
and REF: calibrateCamera returns bad values (zero distortion coefficients, \(c_x\) and
|
|
\(c_y\) very far from the image center, and/or large differences between \(f_x\) and
|
|
\(f_y\) (ratios of 10:1 or more)), then you are probably using patternSize=cvSize(rows,cols)
|
|
instead of using patternSize=cvSize(cols,rows) in REF: findChessboardCorners.
|
|
|
|
<b>Note:</b>
|
|
The function may throw exceptions if an unsupported combination of parameters is provided or
|
|
the system is underconstrained.
|
|
|
|
SEE:
|
|
calibrateCameraRO, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate,
|
|
undistort</dd>
|
|
</dl>
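As noted above, for a planar calibration rig every object point lies in the pattern's XY plane with Z = 0. A minimal plain-Java sketch of generating such points for a chessboard grid (the class name is illustrative; in the Java API these coordinates would be packed into the Mat entries of the objectPoints list):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative helper: object points for a planar chessboard calibration rig.
public class ChessboardPoints {
    // cols x rows inner corners, squareSize in world units (e.g. millimeters).
    public static List<float[]> objectPoints(int cols, int rows, float squareSize) {
        List<float[]> pts = new ArrayList<>();
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // Every point lies in the pattern's XY plane, so Z is always 0.
                pts.add(new float[] { c * squareSize, r * squareSize, 0f });
            }
        }
        return pts;
    }

    public static void main(String[] args) {
        // A 9x6 board yields 54 object points, reused for every view.
        System.out.println(objectPoints(9, 6, 25f).size());
    }
}
```

When the same fully visible pattern is used in every view, this one list is added once per captured view so that objectPoints.size() matches imagePoints.size().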
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCamera(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int,org.opencv.core.TermCriteria)">
|
|
<h3>calibrateCamera</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCamera</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCamera(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int)">
|
|
<h3>calibrateCamera</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCamera</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
int flags)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCamera(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List)">
|
|
<h3>calibrateCamera</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCamera</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCameraROExtended(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)">
|
|
<h3>calibrateCameraROExtended</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCameraROExtended</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
int iFixedPoint,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsObjPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
|
|
|
|
This function is an extension of #calibrateCamera with the method of releasing object which was
|
|
proposed in CITE: strobl2011iccv. In many common cases with inaccurate, unmeasured, roughly planar
|
|
targets (calibration plates), this method can dramatically improve the precision of the estimated
|
|
camera parameters. Both the object-releasing method and standard method are supported by this
|
|
function. Use the parameter <b>iFixedPoint</b> for method selection. In the internal implementation,
|
|
#calibrateCamera is a wrapper for this function.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of calibration pattern points in the calibration pattern
|
|
coordinate space. See #calibrateCamera for details. If the object-releasing method is to be used,
|
|
the identical calibration board must be used in each view and it must be fully visible, and all
|
|
objectPoints[i] must be the same and all points should be roughly close to a plane. <b>The calibration
|
|
target has to be rigid, or at least static if the camera (rather than the calibration target) is
|
|
shifted for grabbing images.</b></dd>
|
|
<dd><code>imagePoints</code> - Vector of vectors of the projections of calibration pattern points. See
|
|
#calibrateCamera for details.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize the intrinsic camera matrix.</dd>
|
|
<dd><code>iFixedPoint</code> - The index of the 3D object point in objectPoints[0] to be fixed. It also acts as
|
|
a switch for calibration method selection. If the object-releasing method is to be used, pass the
|
|
parameter in the range [1, objectPoints[0].size()-2]; a value outside this range selects the
|
|
standard calibration method. Usually the top-right corner point of the calibration
|
|
board grid is recommended to be fixed when the object-releasing method is utilized. According to
|
|
CITE: strobl2011iccv, two other points are also fixed. In this implementation, objectPoints[0].front
|
|
and objectPoints[0].back.z are used. With object-releasing method, accurate rvecs, tvecs and
|
|
newObjPoints are only possible if coordinates of these three fixed points are accurate enough.</dd>
|
|
<dd><code>cameraMatrix</code> - Output 3x3 floating-point camera matrix. See #calibrateCamera for details.</dd>
|
|
<dd><code>distCoeffs</code> - Output vector of distortion coefficients. See #calibrateCamera for details.</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors estimated for each pattern view. See #calibrateCamera
|
|
for details.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view.</dd>
|
|
<dd><code>newObjPoints</code> - The updated output vector of calibration pattern points. The coordinates might
|
|
be scaled based on three fixed points. The returned coordinates are accurate only if the above
|
|
mentioned three fixed points are accurate. If not needed, noArray() can be passed in. This parameter
|
|
is ignored with standard calibration method.</dd>
|
|
<dd><code>stdDeviationsIntrinsics</code> - Output vector of standard deviations estimated for intrinsic parameters.
|
|
See #calibrateCamera for details.</dd>
|
|
<dd><code>stdDeviationsExtrinsics</code> - Output vector of standard deviations estimated for extrinsic parameters.
|
|
See #calibrateCamera for details.</dd>
|
|
<dd><code>stdDeviationsObjPoints</code> - Output vector of standard deviations estimated for refined coordinates
|
|
of calibration pattern points. It has the same size and order as objectPoints[0] vector. This
|
|
parameter is ignored with standard calibration method.</dd>
|
|
<dd><code>perViewErrors</code> - Output vector of the RMS re-projection error estimated for each pattern view.</dd>
|
|
<dd><code>flags</code> - Different flags that may be zero or a combination of some predefined values. See
|
|
#calibrateCamera for details. If the object-releasing method is used, the calibration time may
|
|
be much longer. CALIB_USE_QR or CALIB_USE_LU could be used for faster calibration that is potentially
|
|
less precise and less stable in some rare cases.</dd>
|
|
<dd><code>criteria</code> - Termination criteria for the iterative optimization algorithm.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the overall RMS re-projection error.
|
|
|
|
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
|
|
views. The algorithm is based on CITE: Zhang2000, CITE: BouguetMCT and CITE: strobl2011iccv. See
|
|
#calibrateCamera for other detailed explanations.
|
|
SEE:
|
|
calibrateCamera, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort</dd>
|
|
</dl>
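The iFixedPoint selection rules above can be sketched in plain Java. This assumes corners ordered row-major from the top-left, which is how findChessboardCorners typically reports them; the class and method names are hypothetical:

```java
// Hypothetical helper sketching the iFixedPoint selection rules described above.
public class FixedPointIndex {
    // Index of the top-right corner in a row-major cols x rows grid:
    // the point usually recommended as iFixedPoint.
    public static int topRightIndex(int cols, int rows) {
        return cols - 1; // first row, last column
    }

    // The object-releasing method is selected only for indices in [1, n - 2];
    // any value outside this range falls back to standard calibration.
    public static boolean selectsObjectReleasing(int iFixedPoint, int numPoints) {
        return iFixedPoint >= 1 && iFixedPoint <= numPoints - 2;
    }

    public static void main(String[] args) {
        System.out.println(topRightIndex(9, 6));            // top-right corner index
        System.out.println(selectsObjectReleasing(8, 54));  // object-releasing method
        System.out.println(selectsObjectReleasing(0, 54));  // standard method
    }
}
```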
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCameraROExtended(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>calibrateCameraROExtended</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCameraROExtended</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
int iFixedPoint,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsObjPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
|
|
int flags)</span></div>
|
|
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
|
|
|
|
This function is an extension of #calibrateCamera with the method of releasing object which was
|
|
proposed in CITE: strobl2011iccv. In many common cases with inaccurate, unmeasured, roughly planar
|
|
targets (calibration plates), this method can dramatically improve the precision of the estimated
|
|
camera parameters. Both the object-releasing method and standard method are supported by this
|
|
function. Use the parameter <b>iFixedPoint</b> for method selection. In the internal implementation,
|
|
#calibrateCamera is a wrapper for this function.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of calibration pattern points in the calibration pattern
|
|
coordinate space. See #calibrateCamera for details. If the object-releasing method is to be used,
|
|
the identical calibration board must be used in each view and it must be fully visible, and all
|
|
objectPoints[i] must be the same and all points should be roughly close to a plane. <b>The calibration
|
|
target has to be rigid, or at least static if the camera (rather than the calibration target) is
|
|
shifted for grabbing images.</b></dd>
|
|
<dd><code>imagePoints</code> - Vector of vectors of the projections of calibration pattern points. See
|
|
#calibrateCamera for details.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize the intrinsic camera matrix.</dd>
|
|
<dd><code>iFixedPoint</code> - The index of the 3D object point in objectPoints[0] to be fixed. It also acts as
|
|
a switch for calibration method selection. If the object-releasing method is to be used, pass the
|
|
parameter in the range [1, objectPoints[0].size()-2]; a value outside this range selects the
|
|
standard calibration method. Usually the top-right corner point of the calibration
|
|
board grid is recommended to be fixed when the object-releasing method is utilized. According to
|
|
CITE: strobl2011iccv, two other points are also fixed. In this implementation, objectPoints[0].front
|
|
and objectPoints[0].back.z are used. With object-releasing method, accurate rvecs, tvecs and
|
|
newObjPoints are only possible if coordinates of these three fixed points are accurate enough.</dd>
|
|
<dd><code>cameraMatrix</code> - Output 3x3 floating-point camera matrix. See #calibrateCamera for details.</dd>
|
|
<dd><code>distCoeffs</code> - Output vector of distortion coefficients. See #calibrateCamera for details.</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors estimated for each pattern view. See #calibrateCamera
|
|
for details.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view.</dd>
|
|
<dd><code>newObjPoints</code> - The updated output vector of calibration pattern points. The coordinates might
|
|
be scaled based on three fixed points. The returned coordinates are accurate only if the above
|
|
mentioned three fixed points are accurate. If not needed, noArray() can be passed in. This parameter
|
|
is ignored with standard calibration method.</dd>
|
|
<dd><code>stdDeviationsIntrinsics</code> - Output vector of standard deviations estimated for intrinsic parameters.
|
|
See #calibrateCamera for details.</dd>
|
|
<dd><code>stdDeviationsExtrinsics</code> - Output vector of standard deviations estimated for extrinsic parameters.
|
|
See #calibrateCamera for details.</dd>
|
|
<dd><code>stdDeviationsObjPoints</code> - Output vector of standard deviations estimated for refined coordinates
|
|
of calibration pattern points. It has the same size and order as objectPoints[0] vector. This
|
|
parameter is ignored with standard calibration method.</dd>
|
|
<dd><code>perViewErrors</code> - Output vector of the RMS re-projection error estimated for each pattern view.</dd>
|
|
<dd><code>flags</code> - Different flags that may be zero or a combination of some predefined values. See
|
|
#calibrateCamera for details. If the object-releasing method is used, the calibration time may
|
|
be much longer. CALIB_USE_QR or CALIB_USE_LU could be used for faster calibration that is potentially
|
|
less precise and less stable in some rare cases.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the overall RMS re-projection error.
|
|
|
|
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
|
|
views. The algorithm is based on CITE: Zhang2000, CITE: BouguetMCT and CITE: strobl2011iccv. See
|
|
#calibrateCamera for other detailed explanations.
|
|
SEE:
|
|
calibrateCamera, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort</dd>
|
|
</dl>
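The return value and perViewErrors follow the usual RMS re-projection error definition: the square root of the mean squared distance between observed and projected points. A plain-Java sketch of that definition (illustrative only, not the library's internal code):

```java
// Illustrative sketch of the RMS re-projection error definition.
public class ReprojectionError {
    // observed and projected are parallel arrays of 2D pixel coordinates.
    public static double rms(double[][] observed, double[][] projected) {
        double sumSq = 0.0;
        for (int i = 0; i < observed.length; i++) {
            double dx = observed[i][0] - projected[i][0];
            double dy = observed[i][1] - projected[i][1];
            sumSq += dx * dx + dy * dy; // squared distance for one point
        }
        return Math.sqrt(sumSq / observed.length);
    }

    public static void main(String[] args) {
        double[][] obs = { { 0, 0 }, { 3, 4 } };
        double[][] prj = { { 0, 0 }, { 0, 0 } };
        // One exact match plus one 5-pixel miss, averaged over two points.
        System.out.println(rms(obs, prj));
    }
}
```

Computed per view this gives perViewErrors; computed over all points of all views it gives the overall return value.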
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCameraROExtended(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>calibrateCameraROExtended</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCameraROExtended</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
int iFixedPoint,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsIntrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsExtrinsics,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> stdDeviationsObjPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors)</span></div>
|
|
<div class="block">Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.

 This function is an extension of #calibrateCamera with the object-releasing method proposed in
 CITE: strobl2011iccv. In many common cases with inaccurate, unmeasured, roughly planar
 targets (calibration plates), this method can dramatically improve the precision of the estimated
 camera parameters. Both the object-releasing method and the standard method are supported by this
 function. Use the parameter <b>iFixedPoint</b> for method selection. In the internal implementation,
 #calibrateCamera is a wrapper for this function.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of calibration pattern points in the calibration pattern
 coordinate space. See #calibrateCamera for details. If the object-releasing method is to be used,
 the same calibration board must be used in each view and it must be fully visible; all
 objectPoints[i] must be identical and all points should lie roughly in a plane. <b>The calibration
 target has to be rigid, or at least static if the camera (rather than the calibration target) is
 shifted for grabbing images.</b></dd>
|
|
<dd><code>imagePoints</code> - Vector of vectors of the projections of calibration pattern points. See
|
|
#calibrateCamera for details.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize the intrinsic camera matrix.</dd>
|
|
<dd><code>iFixedPoint</code> - The index of the 3D object point in objectPoints[0] to be fixed. It also acts as
 a switch for calibration method selection. If the object-releasing method is to be used, pass a
 value in the range [1, objectPoints[0].size()-2]; any value outside this range selects the
 standard calibration method. When the object-releasing method is used, fixing the top-right corner
 point of the calibration board grid is usually recommended. According to
 CITE: strobl2011iccv, two other points are also fixed. In this implementation, objectPoints[0].front
 and objectPoints[0].back.z are used. With the object-releasing method, accurate rvecs, tvecs and
 newObjPoints are only possible if the coordinates of these three fixed points are accurate enough.</dd>
|
|
<dd><code>cameraMatrix</code> - Output 3x3 floating-point camera matrix. See #calibrateCamera for details.</dd>
|
|
<dd><code>distCoeffs</code> - Output vector of distortion coefficients. See #calibrateCamera for details.</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors estimated for each pattern view. See #calibrateCamera
|
|
for details.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view.</dd>
|
|
<dd><code>newObjPoints</code> - The updated output vector of calibration pattern points. The coordinates might
 be scaled based on the three fixed points. The returned coordinates are accurate only if the
 above-mentioned three fixed points are accurate. If not needed, noArray() can be passed in. This
 parameter is ignored with the standard calibration method.</dd>
|
|
<dd><code>stdDeviationsIntrinsics</code> - Output vector of standard deviations estimated for intrinsic parameters.
|
|
See #calibrateCamera for details.</dd>
|
|
<dd><code>stdDeviationsExtrinsics</code> - Output vector of standard deviations estimated for extrinsic parameters.
|
|
See #calibrateCamera for details.</dd>
|
|
<dd><code>stdDeviationsObjPoints</code> - Output vector of standard deviations estimated for the refined
 coordinates of calibration pattern points. It has the same size and order as the objectPoints[0]
 vector. This parameter is ignored with the standard calibration method.</dd>
|
|
<dd><code>perViewErrors</code> - Output vector of the RMS re-projection error estimated for each pattern view.
 See #calibrateCamera for details. If the object-releasing method is used, calibration may take
 much longer. CALIB_USE_QR or CALIB_USE_LU can be used for faster calibration, at the cost of
 potentially less precise and less stable results in some rare cases.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the overall RMS re-projection error.
|
|
|
|
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
|
|
views. The algorithm is based on CITE: Zhang2000, CITE: BouguetMCT and CITE: strobl2011iccv. See
|
|
#calibrateCamera for other detailed explanations.
|
|
SEE:
|
|
calibrateCamera, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort</dd>
|
|
</dl>
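The iFixedPoint selection rule described above can be sketched in plain Java. The helper names below are illustrative, not part of the OpenCV API; points are assumed to be listed row-major starting at the top-left corner, as chessboard generators usually produce them.

```java
// Illustrative helpers (not OpenCV API) for choosing iFixedPoint when the
// object-releasing method is wanted.
public class FixedPointChooser {

    /** Index of the top-right corner in a row-major grid with `cols` columns. */
    public static int topRightIndex(int cols) {
        return cols - 1;
    }

    /** True if idx selects the object-releasing method: per the docs it must
     *  lie in [1, totalPoints - 2]; anything else selects standard calibration. */
    public static boolean selectsReleasingMethod(int idx, int totalPoints) {
        return idx >= 1 && idx <= totalPoints - 2;
    }

    public static void main(String[] args) {
        int cols = 9, rows = 6;                 // 9x6 inner-corner chessboard
        int total = cols * rows;                // 54 object points per view
        int iFixedPoint = topRightIndex(cols);  // recommended fixed point
        System.out.println(iFixedPoint + " " + selectsReleasingMethod(iFixedPoint, total));
        System.out.println(selectsReleasingMethod(-1, total)); // false: standard method
    }
}
```

Passing such an index as the iFixedPoint argument enables the object-releasing method; a value like -1 falls back to the standard method.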
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCameraRO(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)">
|
|
<h3>calibrateCameraRO</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCameraRO</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
int iFixedPoint,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCameraRO(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,int)">
|
|
<h3>calibrateCameraRO</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCameraRO</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
int iFixedPoint,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints,
|
|
int flags)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateCameraRO(java.util.List,java.util.List,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat)">
|
|
<h3>calibrateCameraRO</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">calibrateCameraRO</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
int iFixedPoint,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newObjPoints)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrationMatrixValues(org.opencv.core.Mat,org.opencv.core.Size,double,double,double[],double[],double[],org.opencv.core.Point,double[])">
|
|
<h3>calibrationMatrixValues</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">calibrationMatrixValues</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
double apertureWidth,
|
|
double apertureHeight,
|
|
double[] fovx,
|
|
double[] fovy,
|
|
double[] focalLength,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> principalPoint,
|
|
double[] aspectRatio)</span></div>
|
|
<div class="block">Computes useful camera characteristics from the camera intrinsic matrix.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix that can be estimated by #calibrateCamera or
|
|
#stereoCalibrate .</dd>
|
|
<dd><code>imageSize</code> - Input image size in pixels.</dd>
|
|
<dd><code>apertureWidth</code> - Physical width in mm of the sensor.</dd>
|
|
<dd><code>apertureHeight</code> - Physical height in mm of the sensor.</dd>
|
|
<dd><code>fovx</code> - Output field of view in degrees along the horizontal sensor axis.</dd>
|
|
<dd><code>fovy</code> - Output field of view in degrees along the vertical sensor axis.</dd>
|
|
<dd><code>focalLength</code> - Focal length of the lens in mm.</dd>
|
|
<dd><code>principalPoint</code> - Principal point in mm.</dd>
|
|
<dd><code>aspectRatio</code> - \(f_y/f_x\)
|
|
|
|
The function computes various useful camera characteristics from the previously estimated camera
|
|
matrix.
|
|
|
|
<b>Note:</b>
 Keep in mind that the unit 'mm' stands for whatever unit of measure one chooses for
 the chessboard pitch (it can thus be any value).</dd>
|
|
</dl>
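The relationships behind these outputs can be sketched with plain Java arithmetic. This is a hedged illustration of the standard pinhole formulas with made-up numbers, not a call into the OpenCV binding.

```java
// Sketch of the quantities calibrationMatrixValues reports, computed directly
// from pinhole intrinsics; example values are arbitrary.
public class IntrinsicsDemo {

    /** Horizontal field of view in degrees for focal length fx (pixels). */
    public static double fovxDeg(double fx, int imageWidth) {
        return Math.toDegrees(2.0 * Math.atan(imageWidth / (2.0 * fx)));
    }

    /** Focal length converted from pixels to mm via the sensor width. */
    public static double focalMm(double fx, double apertureWidthMm, int imageWidth) {
        return fx * apertureWidthMm / imageWidth;
    }

    public static void main(String[] args) {
        double fx = 800, fy = 800;        // focal lengths in pixels
        int width = 640;                  // image width in pixels
        double apertureWidth = 6.4;       // sensor width in mm

        System.out.printf("fovx=%.2f deg%n", fovxDeg(fx, width));
        System.out.printf("focal=%.2f mm%n", focalMm(fx, apertureWidth, width));
        System.out.printf("aspect=%.2f%n", fy / fx); // f_y / f_x
    }
}
```

As the note above says, 'mm' here is really whatever unit the aperture dimensions are given in.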
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoCalibrateExtended(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)">
|
|
<h3>stereoCalibrateExtended</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">stereoCalibrateExtended</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block">Calibrates a stereo camera setup. This function finds the intrinsic parameters
 for each of the two cameras and the extrinsic parameters between the two cameras.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of the calibration pattern points. The same structure as
|
|
in REF: calibrateCamera. For each pattern view, both cameras need to see the same object
|
|
points. Therefore, objectPoints.size(), imagePoints1.size(), and imagePoints2.size() need to be
|
|
equal as well as objectPoints[i].size(), imagePoints1[i].size(), and imagePoints2[i].size() need to
|
|
be equal for each i.</dd>
|
|
<dd><code>imagePoints1</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the first camera. The same structure as in REF: calibrateCamera.</dd>
|
|
<dd><code>imagePoints2</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the second camera. The same structure as in REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix1</code> - Input/output camera intrinsic matrix for the first camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs1</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix2</code> - Input/output camera intrinsic matrix for the second camera. See the description
 of cameraMatrix1.</dd>
|
|
<dd><code>distCoeffs2</code> - Input/output lens distortion coefficients for the second camera. See
|
|
description for distCoeffs1.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize the camera intrinsic matrices.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector T, this matrix brings
|
|
points given in the first camera's coordinate system to points in the second camera's
|
|
coordinate system. In more technical terms, the tuple of R and T performs a change of basis
|
|
from the first camera's coordinate system to the second camera's coordinate system. Due to its
|
|
duality, this tuple is equivalent to the position of the first camera with respect to the
|
|
second camera coordinate system.</dd>
|
|
<dd><code>T</code> - Output translation vector, see description above.</dd>
|
|
<dd><code>E</code> - Output essential matrix.</dd>
|
|
<dd><code>F</code> - Output fundamental matrix.</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors ( REF: Rodrigues ) estimated for each pattern view in the
|
|
coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each
|
|
i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter
|
|
description) brings the calibration pattern from the object coordinate space (in which object points are
|
|
specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms,
|
|
the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space
|
|
to camera coordinate space of the first camera of the stereo pair.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view, see parameter description
|
|
of previous output parameter ( rvecs ).</dd>
|
|
<dd><code>perViewErrors</code> - Output vector of the RMS re-projection error estimated for each pattern view.</dd>
|
|
<dd><code>flags</code> - Different flags that may be zero or a combination of the following values:
|
|
<ul>
|
|
<li>
|
|
REF: CALIB_FIX_INTRINSIC Fix cameraMatrix? and distCoeffs? so that only R, T, E, and F
|
|
matrices are estimated.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_USE_INTRINSIC_GUESS Optimize some or all of the intrinsic parameters
|
|
according to the specified flags. Initial values are provided by the user.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_USE_EXTRINSIC_GUESS R and T contain valid initial values that are optimized further.
|
|
Otherwise R and T are initialized to the median value of the pattern views (each dimension separately).
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_PRINCIPAL_POINT Fix the principal points during the optimization.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_FOCAL_LENGTH Fix \(f^{(j)}_x\) and \(f^{(j)}_y\) .
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_ASPECT_RATIO Optimize \(f^{(j)}_y\). Fix the ratio \(f^{(j)}_x/f^{(j)}_y\).
|
|
</li>
|
|
<li>
|
|
REF: CALIB_SAME_FOCAL_LENGTH Enforce \(f^{(0)}_x=f^{(1)}_x\) and \(f^{(0)}_y=f^{(1)}_y\) .
|
|
</li>
|
|
<li>
|
|
REF: CALIB_ZERO_TANGENT_DIST Set the tangential distortion coefficients for each camera to
 zero and fix them there.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 Do not change the corresponding radial
|
|
distortion coefficient during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set,
|
|
the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_RATIONAL_MODEL Enable coefficients k4, k5, and k6. For backward
 compatibility, this extra flag must be specified explicitly to make the calibration
 function use the rational model and return 8 coefficients. If the flag is not set, the
 function computes and returns only 5 distortion coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. For
 backward compatibility, this extra flag must be specified explicitly to make the
 calibration function use the thin prism model and return 12 coefficients. If the flag is not
 set, the function computes and returns only 5 distortion coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. For
 backward compatibility, this extra flag must be specified explicitly to make the
 calibration function use the tilted sensor model and return 14 coefficients. If the flag is not
 set, the function computes and returns only 5 distortion coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>criteria</code> - Termination criteria for the iterative optimization algorithm.
|
|
|
|
The function estimates the transformation between two cameras making a stereo pair. If one computes
|
|
the poses of an object relative to the first camera and to the second camera,
|
|
( \(R_1\),\(T_1\) ) and (\(R_2\),\(T_2\)), respectively, for a stereo camera where the
|
|
relative position and orientation between the two cameras are fixed, then those poses definitely
|
|
relate to each other. This means, if the relative position and orientation (\(R\),\(T\)) of the
|
|
two cameras is known, it is possible to compute (\(R_2\),\(T_2\)) when (\(R_1\),\(T_1\)) is
|
|
given. This is what the described function does. It computes (\(R\),\(T\)) such that:
|
|
|
|
\(R_2=R R_1\)
|
|
\(T_2=R T_1 + T.\)
|
|
|
|
Therefore, one can compute the coordinate representation of a 3D point for the second camera's
|
|
coordinate system when given the point's coordinate representation in the first camera's coordinate
|
|
system:
|
|
|
|
\(\begin{bmatrix}
|
|
X_2 \\
|
|
Y_2 \\
|
|
Z_2 \\
|
|
1
|
|
\end{bmatrix} = \begin{bmatrix}
|
|
R & T \\
|
|
0 & 1
|
|
\end{bmatrix} \begin{bmatrix}
|
|
X_1 \\
|
|
Y_1 \\
|
|
Z_1 \\
|
|
1
|
|
\end{bmatrix}.\)
|
|
|
|
|
|
Optionally, it computes the essential matrix E:
|
|
|
|
\(E= \vecthreethree{0}{-T_2}{T_1}{T_2}{0}{-T_0}{-T_1}{T_0}{0} R\)
|
|
|
|
where \(T_i\) are components of the translation vector \(T\) : \(T=[T_0, T_1, T_2]^T\) .
|
|
And the function can also compute the fundamental matrix F:
|
|
|
|
\(F = cameraMatrix2^{-T}\cdot E \cdot cameraMatrix1^{-1}\)
|
|
|
|
Besides the stereo-related information, the function can also perform a full calibration of each of
|
|
the two cameras. However, due to the high dimensionality of the parameter space and noise in the
|
|
input data, the function can diverge from the correct solution. If the intrinsic parameters can be
|
|
estimated with high accuracy for each of the cameras individually (for example, using
|
|
#calibrateCamera ), you are recommended to do so and then pass REF: CALIB_FIX_INTRINSIC flag to the
|
|
function along with the computed intrinsic parameters. Otherwise, if all the parameters are
|
|
estimated at once, it makes sense to restrict some parameters, for example, pass
|
|
REF: CALIB_SAME_FOCAL_LENGTH and REF: CALIB_ZERO_TANGENT_DIST flags, which is usually a
|
|
reasonable assumption.
|
|
|
|
Similarly to #calibrateCamera, the function minimizes the total re-projection error for all the
|
|
points in all the available views from both cameras. The function returns the final value of the
|
|
re-projection error.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
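The pose relation \(R_2=R R_1\), \(T_2=R T_1 + T\) that this function is built on can be checked with plain 3x3 arithmetic. This is a sketch with made-up numbers, independent of the OpenCV types.

```java
// Sketch of the stereo pose relation: given the pattern pose (R1, T1) in the
// first camera and the stereo extrinsics (R, T), the pose in the second
// camera is R2 = R*R1 and T2 = R*T1 + T. Values are made up for illustration.
public class StereoPose {

    /** 3x3 matrix times 3-vector. */
    public static double[] matVec(double[][] m, double[] v) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                out[i] += m[i][j] * v[j];
        return out;
    }

    /** T2 = R*T1 + T. */
    public static double[] secondCameraT(double[][] R, double[] T, double[] T1) {
        double[] rt1 = matVec(R, T1);
        return new double[] { rt1[0] + T[0], rt1[1] + T[1], rt1[2] + T[2] };
    }

    public static void main(String[] args) {
        double[][] R = { {0, -1, 0}, {1, 0, 0}, {0, 0, 1} }; // 90 deg about z
        double[] T  = {1, 0, 0};   // baseline between the two cameras
        double[] T1 = {0, 0, 5};   // pattern 5 units in front of camera 1
        double[] T2 = secondCameraT(R, T, T1);
        System.out.println(T2[0] + " " + T2[1] + " " + T2[2]); // 1.0 0.0 5.0
    }
}
```

With the pattern straight ahead of camera 1, its position in camera 2 is just the rotated T1 shifted by the baseline T.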
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoCalibrateExtended(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat,int)">
|
|
<h3>stereoCalibrateExtended</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">stereoCalibrateExtended</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
|
|
int flags)</span></div>
|
|
<div class="block">Calibrates a stereo camera setup. This function finds the intrinsic parameters
 for each of the two cameras and the extrinsic parameters between the two cameras.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of the calibration pattern points. The same structure as
|
|
in REF: calibrateCamera. For each pattern view, both cameras need to see the same object
|
|
points. Therefore, objectPoints.size(), imagePoints1.size(), and imagePoints2.size() need to be
|
|
equal as well as objectPoints[i].size(), imagePoints1[i].size(), and imagePoints2[i].size() need to
|
|
be equal for each i.</dd>
|
|
<dd><code>imagePoints1</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the first camera. The same structure as in REF: calibrateCamera.</dd>
|
|
<dd><code>imagePoints2</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the second camera. The same structure as in REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix1</code> - Input/output camera intrinsic matrix for the first camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs1</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix2</code> - Input/output camera intrinsic matrix for the second camera. See the description
 of cameraMatrix1.</dd>
|
|
<dd><code>distCoeffs2</code> - Input/output lens distortion coefficients for the second camera. See
|
|
description for distCoeffs1.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize the camera intrinsic matrices.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector T, this matrix brings
|
|
points given in the first camera's coordinate system to points in the second camera's
|
|
coordinate system. In more technical terms, the tuple of R and T performs a change of basis
|
|
from the first camera's coordinate system to the second camera's coordinate system. Due to its
|
|
duality, this tuple is equivalent to the position of the first camera with respect to the
|
|
second camera coordinate system.</dd>
|
|
<dd><code>T</code> - Output translation vector, see description above.</dd>
|
|
<dd><code>E</code> - Output essential matrix.</dd>
|
|
<dd><code>F</code> - Output fundamental matrix.</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors ( REF: Rodrigues ) estimated for each pattern view in the
|
|
coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each
|
|
i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter
|
|
description) brings the calibration pattern from the object coordinate space (in which object points are
|
|
specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms,
|
|
the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space
|
|
to camera coordinate space of the first camera of the stereo pair.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view, see parameter description
|
|
of previous output parameter ( rvecs ).</dd>
|
|
<dd><code>perViewErrors</code> - Output vector of the RMS re-projection error estimated for each pattern view.</dd>
|
|
<dd><code>flags</code> - Different flags that may be zero or a combination of the following values:
|
|
<ul>
|
|
<li>
|
|
REF: CALIB_FIX_INTRINSIC Fix cameraMatrix? and distCoeffs? so that only R, T, E, and F
|
|
matrices are estimated.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_USE_INTRINSIC_GUESS Optimize some or all of the intrinsic parameters
|
|
according to the specified flags. Initial values are provided by the user.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_USE_EXTRINSIC_GUESS R and T contain valid initial values that are optimized further.
|
|
Otherwise R and T are initialized to the median value of the pattern views (each dimension separately).
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_PRINCIPAL_POINT Fix the principal points during the optimization.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_FOCAL_LENGTH Fix \(f^{(j)}_x\) and \(f^{(j)}_y\) .
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_ASPECT_RATIO Optimize \(f^{(j)}_y\). Fix the ratio \(f^{(j)}_x/f^{(j)}_y\).
|
|
</li>
|
|
<li>
|
|
REF: CALIB_SAME_FOCAL_LENGTH Enforce \(f^{(0)}_x=f^{(1)}_x\) and \(f^{(0)}_y=f^{(1)}_y\) .
|
|
</li>
|
|
<li>
|
|
REF: CALIB_ZERO_TANGENT_DIST Set the tangential distortion coefficients for each camera to
 zero and fix them there.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 Do not change the corresponding radial
|
|
distortion coefficient during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set,
|
|
the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_RATIONAL_MODEL Enable coefficients k4, k5, and k6. For backward
|
|
compatibility, this extra flag should be explicitly specified to make the calibration
|
|
function use the rational model and return 8 coefficients. If the flag is not set, the
|
|
function computes and returns only 5 distortion coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. For
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the thin prism model and return 12 coefficients. If the flag is not
|
|
set, the function computes and returns only 5 distortion coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. For
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the tilted sensor model and return 14 coefficients. If the flag is not
|
|
set, the function computes and returns only 5 distortion coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
</ul>
|
|
|
|
The function estimates the transformation between two cameras forming a stereo pair. If one computes
|
|
the poses of an object relative to the first camera and to the second camera,
|
|
( \(R_1\),\(T_1\) ) and (\(R_2\),\(T_2\)), respectively, for a stereo camera where the
|
|
relative position and orientation between the two cameras are fixed, then those poses definitely
|
|
relate to each other. This means that, if the relative position and orientation (\(R\),\(T\)) of the
|
|
two cameras is known, it is possible to compute (\(R_2\),\(T_2\)) when (\(R_1\),\(T_1\)) is
|
|
given. This is what the described function does. It computes (\(R\),\(T\)) such that:
|
|
|
|
\(R_2=R R_1\)
|
|
\(T_2=R T_1 + T.\)
|
|
|
|
Therefore, one can compute the coordinate representation of a 3D point for the second camera's
|
|
coordinate system when given the point's coordinate representation in the first camera's coordinate
|
|
system:
|
|
|
|
\(\begin{bmatrix}
|
|
X_2 \\
|
|
Y_2 \\
|
|
Z_2 \\
|
|
1
|
|
\end{bmatrix} = \begin{bmatrix}
|
|
R & T \\
|
|
0 & 1
|
|
\end{bmatrix} \begin{bmatrix}
|
|
X_1 \\
|
|
Y_1 \\
|
|
Z_1 \\
|
|
1
|
|
\end{bmatrix}.\)
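The homogeneous-matrix relation above is just \(R_2 = R R_1\) and \(T_2 = R T_1 + T\) applied to each point. As a minimal numerical sketch, using plain double[][] arrays rather than OpenCV Mat objects (the rotation and baseline values below are invented for illustration):

```java
// Sketch of composing the fixed stereo extrinsics (R, T) with a pattern
// pose (R1, T1) seen from the first camera to obtain the second camera's
// pose (R2, T2). Plain arrays only; no OpenCV types are used.
public class StereoPoseComposition {

    // 3x3 matrix product
    static double[][] matMul(double[][] a, double[][] b) {
        double[][] c = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // 3x3 matrix times 3-vector
    static double[] matVec(double[][] a, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            for (int k = 0; k < 3; k++)
                r[i] += a[i][k] * v[k];
        return r;
    }

    // R2 = R * R1
    static double[][] composeR(double[][] R, double[][] R1) {
        return matMul(R, R1);
    }

    // T2 = R * T1 + T
    static double[] composeT(double[][] R, double[] T1, double[] T) {
        double[] t = matVec(R, T1);
        for (int i = 0; i < 3; i++) t[i] += T[i];
        return t;
    }

    public static void main(String[] args) {
        // 90-degree rotation about z as the stereo rotation, 6 cm baseline
        double[][] R = {{0, -1, 0}, {1, 0, 0}, {0, 0, 1}};
        double[] T = {-0.06, 0, 0};
        double[][] R1 = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[] T1 = {0.1, 0.2, 1.0};
        double[] T2 = composeT(R, T1, T); // R*T1 + T = (-0.26, 0.10, 1.00)
        System.out.printf("T2 = (%.2f, %.2f, %.2f)%n", T2[0], T2[1], T2[2]);
    }
}
```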
|
|
|
|
|
|
Optionally, it computes the essential matrix E:
|
|
|
|
\(E= \vecthreethree{0}{-T_2}{T_1}{T_2}{0}{-T_0}{-T_1}{T_0}{0} R\)
|
|
|
|
where \(T_i\) are components of the translation vector \(T\) : \(T=[T_0, T_1, T_2]^T\) .
|
|
The function can also compute the fundamental matrix F:
|
|
|
|
\(F = cameraMatrix2^{-T}\cdot E \cdot cameraMatrix1^{-1}\)
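As a sketch of this relation in plain Java (no OpenCV types): the closed-form inverse below holds only for an intrinsic matrix of the usual form with fx, fy on the diagonal and the principal point (cx, cy) in the last column, and the numeric values are invented for illustration. A corresponding pixel pair must satisfy the epipolar constraint x2^T F x1 = 0.

```java
// Sketch: F = K2^{-T} * E * K1^{-1} with hand-rolled 3x3 helpers.
public class FundamentalFromEssential {

    // Inverse of an intrinsic matrix K = [[fx,0,cx],[0,fy,cy],[0,0,1]]
    static double[][] invK(double fx, double fy, double cx, double cy) {
        return new double[][]{
            {1 / fx, 0, -cx / fx},
            {0, 1 / fy, -cy / fy},
            {0, 0, 1}
        };
    }

    static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    static double[][] transpose(double[][] a) {
        double[][] t = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                t[j][i] = a[i][j];
        return t;
    }

    // F = K2^{-T} * E * K1^{-1}
    static double[][] fundamental(double[][] E, double[][] K1inv, double[][] K2inv) {
        return mul(mul(transpose(K2inv), E), K1inv);
    }

    public static void main(String[] args) {
        // E = [T]_x for T = (1, 0, 0) with identity rotation
        double[][] E = {{0, 0, 0}, {0, 0, -1}, {0, 1, 0}};
        // Same invented intrinsics for both cameras
        double[][] Kinv = invK(800, 800, 320, 240);
        double[][] F = fundamental(E, Kinv, Kinv);
        System.out.println(java.util.Arrays.deepToString(F));
    }
}
```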
|
|
|
|
Besides the stereo-related information, the function can also perform a full calibration of each of
|
|
the two cameras. However, due to the high dimensionality of the parameter space and noise in the
|
|
input data, the function can diverge from the correct solution. If the intrinsic parameters can be
|
|
estimated with high accuracy for each of the cameras individually (for example, using
|
|
#calibrateCamera ), it is recommended to do so and then pass the REF: CALIB_FIX_INTRINSIC flag to the
|
|
function along with the computed intrinsic parameters. Otherwise, if all the parameters are
|
|
estimated at once, it makes sense to restrict some parameters, for example, pass
|
|
REF: CALIB_SAME_FOCAL_LENGTH and REF: CALIB_ZERO_TANGENT_DIST flags, which is usually a
|
|
reasonable assumption.
|
|
|
|
Similarly to #calibrateCamera, the function minimizes the total re-projection error for all the
|
|
points in all the available views from both cameras. The function returns the final value of the
|
|
re-projection error.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoCalibrateExtended(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Mat)">
|
|
<h3>stereoCalibrateExtended</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">stereoCalibrateExtended</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors)</span></div>
|
|
<div class="block">Calibrates a stereo camera setup. This function finds the intrinsic parameters
|
|
for each of the two cameras and the extrinsic parameters between the two cameras.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of the calibration pattern points. The same structure as
|
|
in REF: calibrateCamera. For each pattern view, both cameras need to see the same object
|
|
points. Therefore, objectPoints.size(), imagePoints1.size(), and imagePoints2.size() need to be
|
|
equal, and objectPoints[i].size(), imagePoints1[i].size(), and imagePoints2[i].size() need to
|
|
be equal for each i.</dd>
|
|
<dd><code>imagePoints1</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the first camera. The same structure as in REF: calibrateCamera.</dd>
|
|
<dd><code>imagePoints2</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the second camera. The same structure as in REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix1</code> - Input/output camera intrinsic matrix for the first camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs1</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix2</code> - Input/output camera intrinsic matrix for the second camera. See description for
|
|
cameraMatrix1.</dd>
|
|
<dd><code>distCoeffs2</code> - Input/output lens distortion coefficients for the second camera. See
|
|
description for distCoeffs1.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize the camera intrinsic matrices.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector T, this matrix brings
|
|
points given in the first camera's coordinate system to points in the second camera's
|
|
coordinate system. In more technical terms, the tuple of R and T performs a change of basis
|
|
from the first camera's coordinate system to the second camera's coordinate system. Due to its
|
|
duality, this tuple is equivalent to the position of the first camera with respect to the
|
|
second camera coordinate system.</dd>
|
|
<dd><code>T</code> - Output translation vector, see description above.</dd>
|
|
<dd><code>E</code> - Output essential matrix.</dd>
|
|
<dd><code>F</code> - Output fundamental matrix.</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors ( REF: Rodrigues ) estimated for each pattern view in the
|
|
coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each
|
|
i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter
|
|
description) brings the calibration pattern from the object coordinate space (in which object points are
|
|
specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms,
|
|
the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space
|
|
to camera coordinate space of the first camera of the stereo pair.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view, see parameter description
|
|
of previous output parameter ( rvecs ).</dd>
|
|
<dd><code>perViewErrors</code> - Output vector of the RMS re-projection error estimated for each pattern view.
|
|
<ul>
|
|
<li>
|
|
REF: CALIB_FIX_INTRINSIC Fix cameraMatrix? and distCoeffs? so that only R, T, E, and F
|
|
matrices are estimated.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_USE_INTRINSIC_GUESS Optimize some or all of the intrinsic parameters
|
|
according to the specified flags. Initial values are provided by the user.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_USE_EXTRINSIC_GUESS R and T contain valid initial values that are optimized further.
|
|
Otherwise R and T are initialized to the median value of the pattern views (each dimension separately).
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_PRINCIPAL_POINT Fix the principal points during the optimization.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_FOCAL_LENGTH Fix \(f^{(j)}_x\) and \(f^{(j)}_y\) .
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_ASPECT_RATIO Optimize \(f^{(j)}_y\). Fix the ratio \(f^{(j)}_x/f^{(j)}_y\).
|
|
</li>
|
|
<li>
|
|
REF: CALIB_SAME_FOCAL_LENGTH Enforce \(f^{(0)}_x=f^{(1)}_x\) and \(f^{(0)}_y=f^{(1)}_y\) .
|
|
</li>
|
|
<li>
|
|
REF: CALIB_ZERO_TANGENT_DIST Set tangential distortion coefficients for each camera to
|
|
zeros and keep them fixed.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 Do not change the corresponding radial
|
|
distortion coefficient during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set,
|
|
the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_RATIONAL_MODEL Enable coefficients k4, k5, and k6. For backward
|
|
compatibility, this extra flag should be explicitly specified to make the calibration
|
|
function use the rational model and return 8 coefficients. If the flag is not set, the
|
|
function computes and returns only 5 distortion coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. For
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the thin prism model and return 12 coefficients. If the flag is not
|
|
set, the function computes and returns only 5 distortion coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. For
|
|
backward compatibility, this extra flag should be explicitly specified to make the
|
|
calibration function use the tilted sensor model and return 14 coefficients. If the flag is not
|
|
set, the function computes and returns only 5 distortion coefficients.
|
|
</li>
|
|
<li>
|
|
REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during
|
|
the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
|
|
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
|
|
</li>
|
|
</ul>
|
|
|
|
The function estimates the transformation between two cameras forming a stereo pair. If one computes
|
|
the poses of an object relative to the first camera and to the second camera,
|
|
( \(R_1\),\(T_1\) ) and (\(R_2\),\(T_2\)), respectively, for a stereo camera where the
|
|
relative position and orientation between the two cameras are fixed, then those poses definitely
|
|
relate to each other. This means that, if the relative position and orientation (\(R\),\(T\)) of the
|
|
two cameras is known, it is possible to compute (\(R_2\),\(T_2\)) when (\(R_1\),\(T_1\)) is
|
|
given. This is what the described function does. It computes (\(R\),\(T\)) such that:
|
|
|
|
\(R_2=R R_1\)
|
|
\(T_2=R T_1 + T.\)
|
|
|
|
Therefore, one can compute the coordinate representation of a 3D point for the second camera's
|
|
coordinate system when given the point's coordinate representation in the first camera's coordinate
|
|
system:
|
|
|
|
\(\begin{bmatrix}
|
|
X_2 \\
|
|
Y_2 \\
|
|
Z_2 \\
|
|
1
|
|
\end{bmatrix} = \begin{bmatrix}
|
|
R & T \\
|
|
0 & 1
|
|
\end{bmatrix} \begin{bmatrix}
|
|
X_1 \\
|
|
Y_1 \\
|
|
Z_1 \\
|
|
1
|
|
\end{bmatrix}.\)
|
|
|
|
|
|
Optionally, it computes the essential matrix E:
|
|
|
|
\(E= \vecthreethree{0}{-T_2}{T_1}{T_2}{0}{-T_0}{-T_1}{T_0}{0} R\)
|
|
|
|
where \(T_i\) are components of the translation vector \(T\) : \(T=[T_0, T_1, T_2]^T\) .
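The construction of E above is the cross-product (skew-symmetric) matrix \([T]_\times\) multiplied by R. A minimal sketch in plain Java (no OpenCV types; the R and T values are invented for illustration):

```java
// Sketch: E = [T]_x * R, mirroring the formula above term by term.
public class EssentialFromRT {

    // [T]_x: the skew-symmetric cross-product matrix of T = (T0, T1, T2)
    static double[][] skew(double[] t) {
        return new double[][]{
            {    0, -t[2],  t[1]},
            { t[2],     0, -t[0]},
            {-t[1],  t[0],     0}
        };
    }

    // E = [T]_x * R
    static double[][] essential(double[] t, double[][] r) {
        double[][] s = skew(t);
        double[][] e = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    e[i][j] += s[i][k] * r[k][j];
        return e;
    }

    public static void main(String[] args) {
        // Identity rotation and a pure x-baseline: E reduces to [T]_x
        double[][] R = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[] T = {1, 0, 0};
        double[][] E = essential(T, R);
        // E has a zero diagonal here and is singular, as E must be
        System.out.println(java.util.Arrays.deepToString(E));
    }
}
```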
|
|
The function can also compute the fundamental matrix F:
|
|
|
|
\(F = cameraMatrix2^{-T}\cdot E \cdot cameraMatrix1^{-1}\)
|
|
|
|
Besides the stereo-related information, the function can also perform a full calibration of each of
|
|
the two cameras. However, due to the high dimensionality of the parameter space and noise in the
|
|
input data, the function can diverge from the correct solution. If the intrinsic parameters can be
|
|
estimated with high accuracy for each of the cameras individually (for example, using
|
|
#calibrateCamera ), it is recommended to do so and then pass the REF: CALIB_FIX_INTRINSIC flag to the
|
|
function along with the computed intrinsic parameters. Otherwise, if all the parameters are
|
|
estimated at once, it makes sense to restrict some parameters, for example, pass
|
|
REF: CALIB_SAME_FOCAL_LENGTH and REF: CALIB_ZERO_TANGENT_DIST flags, which is usually a
|
|
reasonable assumption.
|
|
|
|
Similarly to #calibrateCamera, the function minimizes the total re-projection error for all the
|
|
points in all the available views from both cameras. The function returns the final value of the
|
|
re-projection error.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)">
|
|
<h3>stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
int flags)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)">
|
|
<h3>stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors,
|
|
int flags)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> perViewErrors)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,org.opencv.core.Size,org.opencv.core.Rect,org.opencv.core.Rect)">
|
|
<h3>stereoRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">stereoRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
int flags,
|
|
double alpha,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize,
|
|
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> validPixROI1,
|
|
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> validPixROI2)</span></div>
|
|
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix1</code> - First camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs1</code> - First camera distortion parameters.</dd>
|
|
<dd><code>cameraMatrix2</code> - Second camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs2</code> - Second camera distortion parameters.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used for stereo calibration.</dd>
|
|
<dd><code>R</code> - Rotation matrix from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>T</code> - Translation vector from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>R1</code> - Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix
|
|
brings points given in the unrectified first camera's coordinate system to points in the rectified
|
|
first camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified first camera's coordinate system to the rectified first camera's coordinate system.</dd>
|
|
<dd><code>R2</code> - Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix
|
|
brings points given in the unrectified second camera's coordinate system to points in the rectified
|
|
second camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified second camera's coordinate system to the rectified second camera's coordinate system.</dd>
|
|
<dd><code>P1</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified first camera's image.</dd>
|
|
<dd><code>P2</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified second camera's image.</dd>
|
|
<dd><code>Q</code> - Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).</dd>
|
|
<dd><code>flags</code> - Operation flags that may be zero or REF: CALIB_ZERO_DISPARITY . If the flag is set,
|
|
the function makes the principal points of each camera have the same pixel coordinates in the
|
|
rectified views. If the flag is not set, the function may still shift the images in the
|
|
horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the
|
|
useful image area.</dd>
|
|
<dd><code>alpha</code> - Free scaling parameter. If it is -1 or absent, the function performs the default
|
|
scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified
|
|
images are zoomed and shifted so that only valid pixels are visible (no black areas after
|
|
rectification). alpha=1 means that the rectified image is decimated and shifted so that all the
|
|
pixels from the original images from the cameras are retained in the rectified images (no source
|
|
image pixels are lost). Any intermediate value yields an intermediate result between
|
|
those two extreme cases.</dd>
|
|
<dd><code>newImageSize</code> - New image resolution after rectification. The same size should be passed to
|
|
#initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0)
|
|
is passed (default), it is set to the original imageSize. Setting it to a larger value can help you
|
|
preserve details in the original image, especially when there is a big radial distortion.</dd>
|
|
<dd><code>validPixROI1</code> - Optional output rectangle inside the first rectified image where all the pixels
are valid. If alpha=0, the ROI covers the whole image. Otherwise, it is likely to be smaller
(see the picture below).</dd>
|
|
<dd><code>validPixROI2</code> - Optional output rectangle inside the second rectified image where all the pixels
are valid. If alpha=0, the ROI covers the whole image. Otherwise, it is likely to be smaller
(see the picture below).
|
|
|
|
The function computes the rotation matrices for each camera that (virtually) make both camera image
|
|
planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies
|
|
the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate
|
|
as input. As output, it provides two rotation matrices and also two projection matrices in the new
|
|
coordinates. The function distinguishes the following two cases:
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Horizontal stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly along the x-axis (with possible small vertical shift). In the rectified images, the
|
|
corresponding epipolar lines in the left and right cameras are horizontal and have the same
|
|
y-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx_1 & 0 \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx_2 & T_x \cdot f \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix} ,\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx_1 \\
|
|
0 & 1 & 0 & -cy \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_x} & \frac{cx_1 - cx_2}{T_x}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Vertical stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar
|
|
lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_1 & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_2 & T_y \cdot f \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix},\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx \\
|
|
0 & 1 & 0 & -cy_1 \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_y} & \frac{cy_1 - cy_2}{T_y}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
As you can see, the first three columns of P1 and P2 will effectively be the new "rectified" camera
|
|
matrices. The matrices, together with R1 and R2 , can then be passed to #initUndistortRectifyMap to
|
|
initialize the rectification map for each camera.
|
|
|
|
See below the screenshot from the stereo_calib.cpp sample. Some red horizontal lines pass through
|
|
the corresponding image regions. This means that the images are well rectified, which is what most
|
|
stereo correspondence algorithms rely on. The green rectangles are roi1 and roi2 . You see that
|
|
their interiors are all valid pixels.
|
|
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
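The horizontal-stereo \(\texttt{Q}\) matrix above can be exercised in plain Java, without loading the OpenCV native library: multiply \(\texttt{Q}\) by the homogeneous pixel \((x, y, d, 1)^T\) and dehomogenize to recover a 3D point. This is a minimal sketch; the intrinsics (f, cx, cy, Tx) are hypothetical example values, not output of any particular calibration.

```java
// Sketch of the disparity-to-depth mapping encoded by the horizontal-stereo Q
// matrix shown above. Pure Java; intrinsics are made-up example values.
public class QMatrixDemo {

    // Applies Q * (x, y, d, 1)^T for the horizontal-stereo Q matrix and
    // dehomogenizes. Returns {X/W, Y/W, Z/W}.
    static double[] reproject(double x, double y, double d,
                              double f, double cx1, double cx2,
                              double cy, double tx) {
        double bigX = x - cx1;                        // row 1: x - cx1
        double bigY = y - cy;                         // row 2: y - cy
        double bigZ = f;                              // row 3: constant f
        double bigW = -d / tx + (cx1 - cx2) / tx;     // row 4: -d/Tx + (cx1-cx2)/Tx
        return new double[] { bigX / bigW, bigY / bigW, bigZ / bigW };
    }

    public static void main(String[] args) {
        double f = 800.0, cx = 320.0, cy = 240.0, tx = -0.1; // 10 cm baseline
        // With CALIB_ZERO_DISPARITY, cx1 == cx2, so depth reduces to -f*Tx/d.
        double[] p = reproject(320.0, 240.0, 16.0, f, cx, cx, cy, tx);
        System.out.printf("X=%.3f Y=%.3f Z=%.3f%n", p[0], p[1], p[2]);
        // Here Z = -f * Tx / d = 800 * 0.1 / 16 = 5.0 (same units as Tx)
    }
}
```

Note that depth falls off as 1/d: halving the disparity doubles the recovered Z, which is why stereo depth precision degrades with distance.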
|
|
<li>
|
|
<section class="detail" id="stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,org.opencv.core.Size,org.opencv.core.Rect)">
|
|
<h3>stereoRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">stereoRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
int flags,
|
|
double alpha,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize,
|
|
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> validPixROI1)</span></div>
|
|
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix1</code> - First camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs1</code> - First camera distortion parameters.</dd>
|
|
<dd><code>cameraMatrix2</code> - Second camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs2</code> - Second camera distortion parameters.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used for stereo calibration.</dd>
|
|
<dd><code>R</code> - Rotation matrix from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>T</code> - Translation vector from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>R1</code> - Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix
|
|
brings points given in the unrectified first camera's coordinate system to points in the rectified
|
|
first camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified first camera's coordinate system to the rectified first camera's coordinate system.</dd>
|
|
<dd><code>R2</code> - Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix
|
|
brings points given in the unrectified second camera's coordinate system to points in the rectified
|
|
second camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified second camera's coordinate system to the rectified second camera's coordinate system.</dd>
|
|
<dd><code>P1</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified first camera's image.</dd>
|
|
<dd><code>P2</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified second camera's image.</dd>
|
|
<dd><code>Q</code> - Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).</dd>
|
|
<dd><code>flags</code> - Operation flags that may be zero or REF: CALIB_ZERO_DISPARITY . If the flag is set,
|
|
the function makes the principal points of each camera have the same pixel coordinates in the
|
|
rectified views. If the flag is not set, the function may still shift the images in the
|
|
horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the
|
|
useful image area.</dd>
|
|
<dd><code>alpha</code> - Free scaling parameter. If it is -1 or absent, the function performs the default
|
|
scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified
|
|
images are zoomed and shifted so that only valid pixels are visible (no black areas after
|
|
rectification). alpha=1 means that the rectified image is decimated and shifted so that all the
|
|
pixels from the original images from the cameras are retained in the rectified images (no source
|
|
image pixels are lost). Any intermediate value yields an intermediate result between
|
|
those two extreme cases.</dd>
|
|
<dd><code>newImageSize</code> - New image resolution after rectification. The same size should be passed to
|
|
#initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0)
|
|
is passed (default), it is set to the original imageSize. Setting it to a larger value can help you
|
|
preserve details in the original image, especially when there is a big radial distortion.</dd>
|
|
<dd><code>validPixROI1</code> - Optional output rectangle inside the first rectified image where all the pixels
are valid. If alpha=0, the ROI covers the whole image. Otherwise, it is likely to be smaller
(see the picture below).
|
|
|
|
The function computes the rotation matrices for each camera that (virtually) make both camera image
|
|
planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies
|
|
the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate
|
|
as input. As output, it provides two rotation matrices and also two projection matrices in the new
|
|
coordinates. The function distinguishes the following two cases:
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Horizontal stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly along the x-axis (with possible small vertical shift). In the rectified images, the
|
|
corresponding epipolar lines in the left and right cameras are horizontal and have the same
|
|
y-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx_1 & 0 \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx_2 & T_x \cdot f \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix} ,\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx_1 \\
|
|
0 & 1 & 0 & -cy \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_x} & \frac{cx_1 - cx_2}{T_x}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Vertical stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar
|
|
lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_1 & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_2 & T_y \cdot f \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix},\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx \\
|
|
0 & 1 & 0 & -cy_1 \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_y} & \frac{cy_1 - cy_2}{T_y}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
As you can see, the first three columns of P1 and P2 will effectively be the new "rectified" camera
|
|
matrices. The matrices, together with R1 and R2 , can then be passed to #initUndistortRectifyMap to
|
|
initialize the rectification map for each camera.
|
|
|
|
See below the screenshot from the stereo_calib.cpp sample. Some red horizontal lines pass through
|
|
the corresponding image regions. This means that the images are well rectified, which is what most
|
|
stereo correspondence algorithms rely on. The green rectangles are roi1 and roi2 . You see that
|
|
their interiors are all valid pixels.
|
|
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,org.opencv.core.Size)">
|
|
<h3>stereoRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">stereoRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
int flags,
|
|
double alpha,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize)</span></div>
|
|
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix1</code> - First camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs1</code> - First camera distortion parameters.</dd>
|
|
<dd><code>cameraMatrix2</code> - Second camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs2</code> - Second camera distortion parameters.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used for stereo calibration.</dd>
|
|
<dd><code>R</code> - Rotation matrix from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>T</code> - Translation vector from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>R1</code> - Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix
|
|
brings points given in the unrectified first camera's coordinate system to points in the rectified
|
|
first camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified first camera's coordinate system to the rectified first camera's coordinate system.</dd>
|
|
<dd><code>R2</code> - Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix
|
|
brings points given in the unrectified second camera's coordinate system to points in the rectified
|
|
second camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified second camera's coordinate system to the rectified second camera's coordinate system.</dd>
|
|
<dd><code>P1</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified first camera's image.</dd>
|
|
<dd><code>P2</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified second camera's image.</dd>
|
|
<dd><code>Q</code> - Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).</dd>
|
|
<dd><code>flags</code> - Operation flags that may be zero or REF: CALIB_ZERO_DISPARITY . If the flag is set,
|
|
the function makes the principal points of each camera have the same pixel coordinates in the
|
|
rectified views. If the flag is not set, the function may still shift the images in the
|
|
horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the
|
|
useful image area.</dd>
|
|
<dd><code>alpha</code> - Free scaling parameter. If it is -1 or absent, the function performs the default
|
|
scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified
|
|
images are zoomed and shifted so that only valid pixels are visible (no black areas after
|
|
rectification). alpha=1 means that the rectified image is decimated and shifted so that all the
|
|
pixels from the original images from the cameras are retained in the rectified images (no source
|
|
image pixels are lost). Any intermediate value yields an intermediate result between
|
|
those two extreme cases.</dd>
|
|
<dd><code>newImageSize</code> - New image resolution after rectification. The same size should be passed to
|
|
#initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0)
|
|
is passed (default), it is set to the original imageSize. Setting it to a larger value can help you
|
|
preserve details in the original image, especially when there is a big radial distortion.
|
|
|
|
|
|
The function computes the rotation matrices for each camera that (virtually) make both camera image
|
|
planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies
|
|
the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate
|
|
as input. As output, it provides two rotation matrices and also two projection matrices in the new
|
|
coordinates. The function distinguishes the following two cases:
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Horizontal stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly along the x-axis (with possible small vertical shift). In the rectified images, the
|
|
corresponding epipolar lines in the left and right cameras are horizontal and have the same
|
|
y-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx_1 & 0 \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx_2 & T_x \cdot f \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix} ,\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx_1 \\
|
|
0 & 1 & 0 & -cy \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_x} & \frac{cx_1 - cx_2}{T_x}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Vertical stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar
|
|
lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_1 & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_2 & T_y \cdot f \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix},\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx \\
|
|
0 & 1 & 0 & -cy_1 \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_y} & \frac{cy_1 - cy_2}{T_y}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
As you can see, the first three columns of P1 and P2 will effectively be the new "rectified" camera
|
|
matrices. The matrices, together with R1 and R2 , can then be passed to #initUndistortRectifyMap to
|
|
initialize the rectification map for each camera.
|
|
|
|
See below the screenshot from the stereo_calib.cpp sample. Some red horizontal lines pass through
|
|
the corresponding image regions. This means that the images are well rectified, which is what most
|
|
stereo correspondence algorithms rely on. The green rectangles are roi1 and roi2 . You see that
|
|
their interiors are all valid pixels.
|
|
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)">
|
|
<h3>stereoRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">stereoRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
int flags,
|
|
double alpha)</span></div>
|
|
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix1</code> - First camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs1</code> - First camera distortion parameters.</dd>
|
|
<dd><code>cameraMatrix2</code> - Second camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs2</code> - Second camera distortion parameters.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used for stereo calibration.</dd>
|
|
<dd><code>R</code> - Rotation matrix from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>T</code> - Translation vector from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>R1</code> - Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix
|
|
brings points given in the unrectified first camera's coordinate system to points in the rectified
|
|
first camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified first camera's coordinate system to the rectified first camera's coordinate system.</dd>
|
|
<dd><code>R2</code> - Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix
|
|
brings points given in the unrectified second camera's coordinate system to points in the rectified
|
|
second camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified second camera's coordinate system to the rectified second camera's coordinate system.</dd>
|
|
<dd><code>P1</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified first camera's image.</dd>
|
|
<dd><code>P2</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified second camera's image.</dd>
|
|
<dd><code>Q</code> - Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).</dd>
|
|
<dd><code>flags</code> - Operation flags that may be zero or REF: CALIB_ZERO_DISPARITY . If the flag is set,
|
|
the function makes the principal points of each camera have the same pixel coordinates in the
|
|
rectified views. If the flag is not set, the function may still shift the images in the
|
|
horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the
|
|
useful image area.</dd>
|
|
<dd><code>alpha</code> - Free scaling parameter. If it is -1 or absent, the function performs the default
|
|
scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified
|
|
images are zoomed and shifted so that only valid pixels are visible (no black areas after
|
|
rectification). alpha=1 means that the rectified image is decimated and shifted so that all the
|
|
pixels from the original images from the cameras are retained in the rectified images (no source
|
|
image pixels are lost). Any intermediate value yields an intermediate result between
|
|
those two extreme cases.
|
|
|
|
|
|
The function computes the rotation matrices for each camera that (virtually) make both camera image
|
|
planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies
|
|
the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate
|
|
as input. As output, it provides two rotation matrices and also two projection matrices in the new
|
|
coordinates. The function distinguishes the following two cases:
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Horizontal stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly along the x-axis (with possible small vertical shift). In the rectified images, the
|
|
corresponding epipolar lines in the left and right cameras are horizontal and have the same
|
|
y-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx_1 & 0 \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx_2 & T_x \cdot f \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix} ,\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx_1 \\
|
|
0 & 1 & 0 & -cy \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_x} & \frac{cx_1 - cx_2}{T_x}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Vertical stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar
|
|
lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_1 & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_2 & T_y \cdot f \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix},\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx \\
|
|
0 & 1 & 0 & -cy_1 \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_y} & \frac{cy_1 - cy_2}{T_y}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
As you can see, the first three columns of P1 and P2 will effectively be the new "rectified" camera
|
|
matrices. The matrices, together with R1 and R2 , can then be passed to #initUndistortRectifyMap to
|
|
initialize the rectification map for each camera.
|
|
|
|
See below the screenshot from the stereo_calib.cpp sample. Some red horizontal lines pass through
|
|
the corresponding image regions. This means that the images are well rectified, which is what most
|
|
stereo correspondence algorithms rely on. The green rectangles are roi1 and roi2 ; their
interiors contain only valid pixels.
|
|
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>stereoRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">stereoRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
int flags)</span></div>
|
|
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix1</code> - First camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs1</code> - First camera distortion parameters.</dd>
|
|
<dd><code>cameraMatrix2</code> - Second camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs2</code> - Second camera distortion parameters.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used for stereo calibration.</dd>
|
|
<dd><code>R</code> - Rotation matrix from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>T</code> - Translation vector from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>R1</code> - Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix
|
|
brings points given in the unrectified first camera's coordinate system to points in the rectified
|
|
first camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified first camera's coordinate system to the rectified first camera's coordinate system.</dd>
|
|
<dd><code>R2</code> - Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix
|
|
brings points given in the unrectified second camera's coordinate system to points in the rectified
|
|
second camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified second camera's coordinate system to the rectified second camera's coordinate system.</dd>
|
|
<dd><code>P1</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified first camera's image.</dd>
|
|
<dd><code>P2</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified second camera's image.</dd>
|
|
<dd><code>Q</code> - Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).</dd>
|
|
<dd><code>flags</code> - Operation flags that may be zero or REF: CALIB_ZERO_DISPARITY . If the flag is set,
the function makes the principal points of each camera have the same pixel coordinates in the
rectified views. If the flag is not set, the function may still shift the images in the
horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the
useful image area.

In this overload, the free scaling parameter alpha defaults to -1 (automatic scaling), the new
image size defaults to the original imageSize, and the valid-pixel ROIs are not returned. In the
fuller overloads, alpha=0 means that the rectified images are zoomed and shifted so that only
valid pixels are visible (no black areas after rectification), alpha=1 means that the rectified
images are decimated and shifted so that all the pixels from the original camera images are
retained (no source image pixels are lost), and any intermediate value yields an intermediate
result between those two extreme cases; setting newImageSize to a value larger than imageSize can
help preserve details in the original image, especially when there is a big radial distortion.
|
|
|
|
The function computes the rotation matrices for each camera that (virtually) make both camera image
|
|
planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies
|
|
the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate
|
|
as input. As output, it provides two rotation matrices and also two projection matrices in the new
|
|
coordinates. The function distinguishes the following two cases:
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Horizontal stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly along the x-axis (with possible small vertical shift). In the rectified images, the
|
|
corresponding epipolar lines in the left and right cameras are horizontal and have the same
|
|
y-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx_1 & 0 \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx_2 & T_x \cdot f \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix} ,\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx_1 \\
|
|
0 & 1 & 0 & -cy \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_x} & \frac{cx_1 - cx_2}{T_x}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Vertical stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar
|
|
lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_1 & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_2 & T_y \cdot f \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix},\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx \\
|
|
0 & 1 & 0 & -cy_1 \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_y} & \frac{cy_1 - cy_2}{T_y}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
As you can see, the first three columns of P1 and P2 will effectively be the new "rectified" camera
|
|
matrices. The matrices, together with R1 and R2 , can then be passed to #initUndistortRectifyMap to
|
|
initialize the rectification map for each camera.
|
|
|
|
See below the screenshot from the stereo_calib.cpp sample. Some red horizontal lines pass through
|
|
the corresponding image regions. This means that the images are well rectified, which is what most
|
|
stereo correspondence algorithms rely on. The green rectangles are roi1 and roi2 ; their
interiors contain only valid pixels.
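As a sanity check on these projection matrices, the following plain-Java sketch (hypothetical rig parameters, no OpenCV dependency) projects one 3D point with the horizontal-stereo P1 and P2 given above and confirms that the disparity \(x_1 - x_2\) equals \(-f \cdot T_x / Z\) when \(cx_1 = cx_2\) (REF: CALIB_ZERO_DISPARITY set):

```java
public class RectifiedProjection {
    // Projects a 3D point (X, Y, Z), given in the rectified first camera's
    // coordinate system, with the horizontal-stereo P1/P2 from the docs:
    // x = (f*X + cx*Z + t) / Z, y = (f*Y + cy*Z) / Z,
    // where t = 0 for P1 and t = Tx * f for P2.
    static double[] project(double X, double Y, double Z,
                            double f, double cx, double cy, double txTimesF) {
        return new double[] { (f * X + cx * Z + txTimesF) / Z,
                              (f * Y + cy * Z) / Z };
    }

    public static void main(String[] args) {
        // Hypothetical rectified rig: f = 500 px, principal point (320, 240),
        // T_x = -0.1 (10 cm baseline, negative in OpenCV's convention).
        double f = 500, cx = 320, cy = 240, tx = -0.1;
        double X = 0.2, Y = 0.0, Z = 1.0;
        double x1 = project(X, Y, Z, f, cx, cy, 0)[0];       // first camera
        double x2 = project(X, Y, Z, f, cx, cy, tx * f)[0];  // second camera
        // disparity = x1 - x2 = -f * Tx / Z = 50 px; both views share y = 240.
        System.out.println(x1 - x2); // prints 50.0
    }
}
```

Note that both rows of P1 and P2 share the same f and cy, which is what makes the corresponding epipolar lines horizontal with equal y-coordinates.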
|
|
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>stereoRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">stereoRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q)</span></div>
|
|
<div class="block">Computes rectification transforms for each head of a calibrated stereo camera.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix1</code> - First camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs1</code> - First camera distortion parameters.</dd>
|
|
<dd><code>cameraMatrix2</code> - Second camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs2</code> - Second camera distortion parameters.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used for stereo calibration.</dd>
|
|
<dd><code>R</code> - Rotation matrix from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>T</code> - Translation vector from the coordinate system of the first camera to the second camera,
|
|
see REF: stereoCalibrate.</dd>
|
|
<dd><code>R1</code> - Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix
|
|
brings points given in the unrectified first camera's coordinate system to points in the rectified
|
|
first camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified first camera's coordinate system to the rectified first camera's coordinate system.</dd>
|
|
<dd><code>R2</code> - Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix
|
|
brings points given in the unrectified second camera's coordinate system to points in the rectified
|
|
second camera's coordinate system. In more technical terms, it performs a change of basis from the
|
|
unrectified second camera's coordinate system to the rectified second camera's coordinate system.</dd>
|
|
<dd><code>P1</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified first camera's image.</dd>
|
|
<dd><code>P2</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
|
|
camera, i.e. it projects points given in the rectified first camera coordinate system into the
|
|
rectified second camera's image.</dd>
|
|
<dd><code>Q</code> - Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).

In this overload, the operation flags default to REF: CALIB_ZERO_DISPARITY, so the function makes
the principal points of each camera have the same pixel coordinates in the rectified views. The
free scaling parameter alpha defaults to -1 (automatic scaling), the new image size defaults to
the original imageSize, and the valid-pixel ROIs are not returned; see the overload that also
takes flags, and the fuller variants, for control over these.
|
|
|
|
The function computes the rotation matrices for each camera that (virtually) make both camera image
|
|
planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies
|
|
the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate
|
|
as input. As output, it provides two rotation matrices and also two projection matrices in the new
|
|
coordinates. The function distinguishes the following two cases:
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Horizontal stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly along the x-axis (with possible small vertical shift). In the rectified images, the
|
|
corresponding epipolar lines in the left and right cameras are horizontal and have the same
|
|
y-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx_1 & 0 \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx_2 & T_x \cdot f \\
|
|
0 & f & cy & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix} ,\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx_1 \\
|
|
0 & 1 & 0 & -cy \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_x} & \frac{cx_1 - cx_2}{T_x}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
<ul>
|
|
<li>
|
|
<b>Vertical stereo</b>: the first and the second camera views are shifted relative to each other
|
|
mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar
|
|
lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like:
|
|
</li>
|
|
</ul>
|
|
|
|
\(\texttt{P1} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_1 & 0 \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix}\)
|
|
|
|
\(\texttt{P2} = \begin{bmatrix}
|
|
f & 0 & cx & 0 \\
|
|
0 & f & cy_2 & T_y \cdot f \\
|
|
0 & 0 & 1 & 0
|
|
\end{bmatrix},\)
|
|
|
|
\(\texttt{Q} = \begin{bmatrix}
|
|
1 & 0 & 0 & -cx \\
|
|
0 & 1 & 0 & -cy_1 \\
|
|
0 & 0 & 0 & f \\
|
|
0 & 0 & -\frac{1}{T_y} & \frac{cy_1 - cy_2}{T_y}
|
|
\end{bmatrix} \)
|
|
|
|
where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if
|
|
REF: CALIB_ZERO_DISPARITY is set.
|
|
|
|
As you can see, the first three columns of P1 and P2 will effectively be the new "rectified" camera
|
|
matrices. The matrices, together with R1 and R2 , can then be passed to #initUndistortRectifyMap to
|
|
initialize the rectification map for each camera.
|
|
|
|
See below the screenshot from the stereo_calib.cpp sample. Some red horizontal lines pass through
|
|
the corresponding image regions. This means that the images are well rectified, which is what most
|
|
stereo correspondence algorithms rely on. The green rectangles are roi1 and roi2 ; their
interiors contain only valid pixels.
|
|
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoRectifyUncalibrated(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,double)">
|
|
<h3>stereoRectifyUncalibrated</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">stereoRectifyUncalibrated</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imgSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> H1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> H2,
|
|
double threshold)</span></div>
|
|
<div class="block">Computes a rectification transform for an uncalibrated stereo camera.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of feature points in the first image.</dd>
|
|
<dd><code>points2</code> - The corresponding points in the second image. The same formats as in
|
|
#findFundamentalMat are supported.</dd>
|
|
<dd><code>F</code> - Input fundamental matrix. It can be computed from the same set of point pairs using
|
|
#findFundamentalMat .</dd>
|
|
<dd><code>imgSize</code> - Size of the image.</dd>
|
|
<dd><code>H1</code> - Output rectification homography matrix for the first image.</dd>
|
|
<dd><code>H2</code> - Output rectification homography matrix for the second image.</dd>
|
|
<dd><code>threshold</code> - Optional threshold used to filter out the outliers. If the parameter is greater
|
|
than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points
|
|
for which \(|\texttt{points2[i]}^T \cdot \texttt{F} \cdot \texttt{points1[i]}|>\texttt{threshold}\) )
|
|
are rejected prior to computing the homographies. Otherwise, all the points are considered inliers.
|
|
|
|
The function computes the rectification transformations without knowing intrinsic parameters of the
|
|
cameras and their relative position in space, which explains the suffix "uncalibrated". Another
|
|
related difference from #stereoRectify is that the function outputs not the rectification
|
|
transformations in the object (3D) space, but the planar perspective transformations encoded by the
|
|
homography matrices H1 and H2 . The function implements the algorithm CITE: Hartley99 .
|
|
|
|
<b>Note:</b>
|
|
While the algorithm does not need to know the intrinsic parameters of the cameras, it heavily
|
|
depends on the epipolar geometry. Therefore, if the camera lenses have a significant distortion,
|
|
it would be better to correct it before computing the fundamental matrix and calling this
|
|
function. For example, distortion coefficients can be estimated for each head of the stereo camera
|
|
separately by using #calibrateCamera . Then, the images can be corrected using #undistort , or
|
|
just the point coordinates can be corrected with #undistortPoints .</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
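The threshold test is a plain algebraic check. The sketch below (plain Java; the fundamental matrix and points are made up) evaluates the residual \(|\texttt{points2[i]}^T \cdot \texttt{F} \cdot \texttt{points1[i]}|\) that the function compares against threshold:

```java
public class EpipolarResidual {
    // Residual |p2^T * F * p1| for homogeneous points p = (x, y, 1)
    // and a 3x3 fundamental matrix F.
    static double residual(double[][] F, double[] p1, double[] p2) {
        double r = 0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r += p2[i] * F[i][j] * p1[j];
        return Math.abs(r);
    }

    public static void main(String[] args) {
        // For an already-rectified horizontal pair, F takes this canonical
        // form, and the epipolar constraint reduces to "same row": y1 == y2.
        double[][] F = { { 0, 0, 0 }, { 0, 0, -1 }, { 0, 1, 0 } };
        double[] p1 = { 10, 5, 1 };
        System.out.println(residual(F, p1, new double[] { 20, 5, 1 })); // prints 0.0 (inlier)
        System.out.println(residual(F, p1, new double[] { 20, 7, 1 })); // prints 2.0 (rejected if threshold < 2)
    }
}
```

Point pairs whose residual exceeds threshold are discarded before the homographies H1 and H2 are estimated, which makes the result robust to mismatched correspondences.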
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="stereoRectifyUncalibrated(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>stereoRectifyUncalibrated</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">stereoRectifyUncalibrated</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imgSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> H1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> H2)</span></div>
|
|
<div class="block">Computes a rectification transform for an uncalibrated stereo camera.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of feature points in the first image.</dd>
|
|
<dd><code>points2</code> - The corresponding points in the second image. The same formats as in
|
|
#findFundamentalMat are supported.</dd>
|
|
<dd><code>F</code> - Input fundamental matrix. It can be computed from the same set of point pairs using
|
|
#findFundamentalMat .</dd>
|
|
<dd><code>imgSize</code> - Size of the image.</dd>
|
|
<dd><code>H1</code> - Output rectification homography matrix for the first image.</dd>
|
|
<dd><code>H2</code> - Output rectification homography matrix for the second image.

In this overload, the outlier-rejection threshold defaults to 5: all the point pairs that do not
comply with the epipolar geometry (that is, the points for which
\(|\texttt{points2[i]}^T \cdot \texttt{F} \cdot \texttt{points1[i]}|>\texttt{threshold}\) )
are rejected prior to computing the homographies.
|
|
|
|
The function computes the rectification transformations without knowing intrinsic parameters of the
|
|
cameras and their relative position in space, which explains the suffix "uncalibrated". Another
|
|
related difference from #stereoRectify is that the function outputs not the rectification
|
|
transformations in the object (3D) space, but the planar perspective transformations encoded by the
|
|
homography matrices H1 and H2 . The function implements the algorithm CITE: Hartley99 .
|
|
|
|
<b>Note:</b>
|
|
While the algorithm does not need to know the intrinsic parameters of the cameras, it heavily
|
|
depends on the epipolar geometry. Therefore, if the camera lenses have a significant distortion,
|
|
it would be better to correct it before computing the fundamental matrix and calling this
|
|
function. For example, distortion coefficients can be estimated for each head of the stereo camera
|
|
separately by using #calibrateCamera . Then, the images can be corrected using #undistort , or
|
|
just the point coordinates can be corrected with #undistortPoints .</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="rectify3Collinear(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Size,org.opencv.core.Rect,org.opencv.core.Rect,int)">
|
|
<h3>rectify3Collinear</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">float</span> <span class="element-name">rectify3Collinear</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs3,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imgpt1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imgpt3,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R12,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T12,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R13,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T13,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P3,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
double alpha,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImgSize,
|
|
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> roi1,
|
|
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> roi2,
|
|
int flags)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="getOptimalNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,double,org.opencv.core.Size,org.opencv.core.Rect,boolean)">
|
|
<h3>getOptimalNewCameraMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">getOptimalNewCameraMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
double alpha,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImgSize,
|
|
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> validPixROI,
|
|
boolean centerPrincipalPoint)</span></div>
|
|
<div class="block">Returns the new camera intrinsic matrix based on the free scaling parameter.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>imageSize</code> - Original image size.</dd>
|
|
<dd><code>alpha</code> - Free scaling parameter between 0 (when all the pixels in the undistorted image are
|
|
valid) and 1 (when all the source image pixels are retained in the undistorted image). See
|
|
#stereoRectify for details.</dd>
|
|
<dd><code>newImgSize</code> - Image size after rectification. By default, it is set to imageSize .</dd>
|
|
<dd><code>validPixROI</code> - Optional output rectangle that outlines all-good-pixels region in the
|
|
undistorted image. See roi1, roi2 description in #stereoRectify .</dd>
|
|
<dd><code>centerPrincipalPoint</code> - Optional flag that indicates whether in the new camera intrinsic matrix the
|
|
principal point should be at the image center or not. By default, the principal point is chosen to
|
|
best fit a subset of the source image (determined by alpha) to the corrected image.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>new_camera_matrix Output new camera intrinsic matrix.
|
|
|
|
The function computes and returns the optimal new camera intrinsic matrix based on the free scaling parameter.
|
|
By varying this parameter, you may retrieve only sensible pixels alpha=0 , keep all the original
|
|
image pixels if there is valuable information in the corners alpha=1 , or get something in between.
|
|
When alpha>0 , the undistorted result is likely to have some black pixels corresponding to
|
|
"virtual" pixels outside of the captured distorted image. The original camera intrinsic matrix, distortion
|
|
coefficients, the computed new camera intrinsic matrix, and newImageSize should be passed to
|
|
#initUndistortRectifyMap to produce the maps for #remap .</dd>
|
|
</dl>
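To build intuition for the alpha parameter, here is a toy plain-Java sketch. It is not OpenCV's actual algorithm (roughly, the real function fits inner and outer pixel rectangles to the undistorted image and blends between them); it only illustrates the idea of interpolating between the all-valid-pixels view (alpha=0) and the all-source-pixels view (alpha=1), with hypothetical rectangle values:

```java
public class AlphaCrop {
    // Linearly interpolates between the inner all-valid rectangle (alpha = 0)
    // and the outer rectangle covering every source pixel (alpha = 1).
    // Rectangles are {x, y, width, height}. This mirrors the *idea* of the
    // free scaling parameter, not OpenCV's exact implementation.
    static double[] blend(double[] inner, double[] outer, double alpha) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            r[i] = (1 - alpha) * inner[i] + alpha * outer[i];
        return r;
    }

    public static void main(String[] args) {
        double[] inner = { 40, 30, 560, 420 };  // hypothetical valid-pixel ROI
        double[] outer = { 0, 0, 640, 480 };    // full 640x480 source image
        // alpha = 0.5 keeps half of the black border that alpha = 1 would show.
        double[] mid = blend(inner, outer, 0.5);
        System.out.println(mid[2] + "x" + mid[3]); // prints 600.0x450.0
    }
}
```

In the real API the blended window determines the returned camera intrinsic matrix, which is then passed with the distortion coefficients to #initUndistortRectifyMap to build the maps for #remap.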
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="getOptimalNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,double,org.opencv.core.Size,org.opencv.core.Rect)">
|
|
<h3>getOptimalNewCameraMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">getOptimalNewCameraMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
double alpha,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImgSize,
|
|
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> validPixROI)</span></div>
|
|
<div class="block">Returns the new camera intrinsic matrix based on the free scaling parameter.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are
|
|
assumed.</dd>
|
|
<dd><code>imageSize</code> - Original image size.</dd>
|
|
<dd><code>alpha</code> - Free scaling parameter between 0 (when all the pixels in the undistorted image are
|
|
valid) and 1 (when all the source image pixels are retained in the undistorted image). See
|
|
#stereoRectify for details.</dd>
|
|
<dd><code>newImgSize</code> - Image size after rectification. By default, it is set to imageSize .</dd>
|
|
<dd><code>validPixROI</code> - Optional output rectangle that outlines all-good-pixels region in the
undistorted image. See roi1, roi2 description in #stereoRectify .

In this overload, centerPrincipalPoint defaults to false: the principal point is chosen to best
fit a subset of the source image (determined by alpha) to the corrected image.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>new_camera_matrix Output new camera intrinsic matrix.
|
|
|
|
The function computes and returns the optimal new camera intrinsic matrix based on the free scaling parameter.
|
|
By varying this parameter, you may retrieve only sensible pixels alpha=0 , keep all the original
|
|
image pixels if there is valuable information in the corners alpha=1 , or get something in between.
|
|
When alpha>0 , the undistorted result is likely to have some black pixels corresponding to
|
|
"virtual" pixels outside of the captured distorted image. The original camera intrinsic matrix, distortion
|
|
coefficients, the computed new camera intrinsic matrix, and newImageSize should be passed to
|
|
#initUndistortRectifyMap to produce the maps for #remap .</dd>
|
|
</dl>
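The role of the intrinsic matrix in the description above can be made concrete with a small pure-Java sketch (no OpenCV dependency; the focal lengths, principal point, and test point below are hypothetical values): a 3D point (X, Y, Z) in the camera frame projects to pixel coordinates (fx*X/Z + cx, fy*Y/Z + cy).

```java
// Minimal pinhole-projection sketch showing what a camera intrinsic
// matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] does to a 3D point.
// Pure Java, no OpenCV dependency; all numeric values are hypothetical.
public class PinholeSketch {

    // Project a 3D point p = (X, Y, Z), given in the camera frame,
    // to pixel coordinates (u, v) using intrinsic matrix K.
    static double[] project(double[][] K, double[] p) {
        double x = p[0] / p[2];            // normalized image coordinate x = X/Z
        double y = p[1] / p[2];            // normalized image coordinate y = Y/Z
        return new double[] {
            K[0][0] * x + K[0][2],         // u = fx * x + cx
            K[1][1] * y + K[1][2]          // v = fy * y + cy
        };
    }

    public static void main(String[] args) {
        double[][] K = {
            {800, 0, 320},                 // fx = 800, cx = 320
            {0, 800, 240},                 // fy = 800, cy = 240
            {0,   0,   1}
        };
        double[] uv = project(K, new double[] {0.5, -0.25, 2.0});
        System.out.println(uv[0] + " " + uv[1]);  // prints: 520.0 140.0
    }
}
```

In an actual undistortion pipeline, the matrix returned by getOptimalNewCameraMatrix plays the role of K for the corrected image and is passed, together with the original intrinsics and distortion coefficients, to #initUndistortRectifyMap and #remap, as noted above.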
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="getOptimalNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,double,org.opencv.core.Size)">
|
|
<h3>getOptimalNewCameraMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">getOptimalNewCameraMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
double alpha,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImgSize)</span></div>
|
|
<div class="block">Returns the new camera intrinsic matrix based on the free scaling parameter.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\).
If the vector is NULL/empty, zero distortion coefficients are assumed.</dd>
|
|
<dd><code>imageSize</code> - Original image size.</dd>
|
|
<dd><code>alpha</code> - Free scaling parameter between 0 (when all the pixels in the undistorted image are
|
|
valid) and 1 (when all the source image pixels are retained in the undistorted image). See
|
|
#stereoRectify for details.</dd>
|
|
<dd><code>newImgSize</code> - Image size after rectification. By default, it is set to imageSize .
The principal point of the new matrix is chosen to
best fit a subset of the source image (determined by alpha) to the corrected image.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>new_camera_matrix Output new camera intrinsic matrix.
|
|
|
|
The function computes and returns the optimal new camera intrinsic matrix based on the free scaling parameter.
|
|
By varying this parameter, you may retrieve only sensible pixels (alpha=0), keep all the original
image pixels if there is valuable information in the corners (alpha=1), or get something in between.
|
|
When alpha>0 , the undistorted result is likely to have some black pixels corresponding to
|
|
"virtual" pixels outside of the captured distorted image. The original camera intrinsic matrix, distortion
|
|
coefficients, the computed new camera intrinsic matrix, and newImageSize should be passed to
|
|
#initUndistortRectifyMap to produce the maps for #remap .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="getOptimalNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,double)">
|
|
<h3>getOptimalNewCameraMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">getOptimalNewCameraMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
double alpha)</span></div>
|
|
<div class="block">Returns the new camera intrinsic matrix based on the free scaling parameter.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix.</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\).
If the vector is NULL/empty, zero distortion coefficients are assumed.</dd>
|
|
<dd><code>imageSize</code> - Original image size.</dd>
|
|
<dd><code>alpha</code> - Free scaling parameter between 0 (when all the pixels in the undistorted image are
valid) and 1 (when all the source image pixels are retained in the undistorted image). See
#stereoRectify for details.
The principal point of the new matrix is chosen to
best fit a subset of the source image (determined by alpha) to the corrected image.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>new_camera_matrix Output new camera intrinsic matrix.
|
|
|
|
The function computes and returns the optimal new camera intrinsic matrix based on the free scaling parameter.
|
|
By varying this parameter, you may retrieve only sensible pixels (alpha=0), keep all the original
image pixels if there is valuable information in the corners (alpha=1), or get something in between.
|
|
When alpha>0 , the undistorted result is likely to have some black pixels corresponding to
|
|
"virtual" pixels outside of the captured distorted image. The original camera intrinsic matrix, distortion
|
|
coefficients, the computed new camera intrinsic matrix, and newImageSize should be passed to
|
|
#initUndistortRectifyMap to produce the maps for #remap .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateHandEye(java.util.List,java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>calibrateHandEye</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">calibrateHandEye</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_gripper2base,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_gripper2base,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_target2cam,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_target2cam,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_cam2gripper,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_cam2gripper,
|
|
int method)</span></div>
|
|
<div class="block">Computes Hand-Eye calibration: \(_{}^{g}\textrm{T}_c\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>R_gripper2base</code> - Rotation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the gripper frame to the robot base frame (\(_{}^{b}\textrm{T}_g\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the rotation, <code>(3x3)</code> rotation matrices or <code>(3x1)</code> rotation vectors,
|
|
for all the transformations from gripper frame to robot base frame.</dd>
|
|
<dd><code>t_gripper2base</code> - Translation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the gripper frame to the robot base frame (\(_{}^{b}\textrm{T}_g\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the <code>(3x1)</code> translation vectors for all the transformations
|
|
from gripper frame to robot base frame.</dd>
|
|
<dd><code>R_target2cam</code> - Rotation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the target frame to the camera frame (\(_{}^{c}\textrm{T}_t\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the rotation, <code>(3x3)</code> rotation matrices or <code>(3x1)</code> rotation vectors,
|
|
for all the transformations from calibration target frame to camera frame.</dd>
|
|
<dd><code>t_target2cam</code> - Translation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the target frame to the camera frame (\(_{}^{c}\textrm{T}_t\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the <code>(3x1)</code> translation vectors for all the transformations
|
|
from calibration target frame to camera frame.</dd>
|
|
<dd><code>R_cam2gripper</code> - Estimated <code>(3x3)</code> rotation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the camera frame to the gripper frame (\(_{}^{g}\textrm{T}_c\)).</dd>
|
|
<dd><code>t_cam2gripper</code> - Estimated <code>(3x1)</code> translation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the camera frame to the gripper frame (\(_{}^{g}\textrm{T}_c\)).</dd>
|
|
<dd><code>method</code> - One of the implemented Hand-Eye calibration methods, see cv::HandEyeCalibrationMethod
|
|
|
|
The function performs the Hand-Eye calibration using various methods. One approach consists of estimating the
rotation and then the translation (separable solutions); the following methods are implemented:
|
|
<ul>
|
|
<li>
|
|
R. Tsai, R. Lenz A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/EyeCalibration \cite Tsai89
|
|
</li>
|
|
<li>
|
|
F. Park, B. Martin Robot Sensor Calibration: Solving AX = XB on the Euclidean Group \cite Park94
|
|
</li>
|
|
<li>
|
|
R. Horaud, F. Dornaika Hand-Eye Calibration \cite Horaud95
|
|
</li>
|
|
</ul>
|
|
|
|
Another approach consists of estimating the rotation and the translation simultaneously (simultaneous solutions),
with the following implemented methods:
|
|
<ul>
|
|
<li>
|
|
N. Andreff, R. Horaud, B. Espiau On-line Hand-Eye Calibration \cite Andreff99
|
|
</li>
|
|
<li>
|
|
K. Daniilidis Hand-Eye Calibration Using Dual Quaternions \cite Daniilidis98
|
|
</li>
|
|
</ul>
|
|
|
|
The following picture describes the Hand-Eye calibration problem, in which the transformation from a camera ("eye")
mounted on a robot gripper ("hand") to the gripper frame has to be estimated. This configuration is called eye-in-hand.
|
|
|
|
The eye-to-hand configuration consists of a static camera observing a calibration pattern mounted on the robot
|
|
end-effector. The transformation from the camera to the robot base frame can then be estimated by inputting
|
|
the suitable transformations to the function, see below.
|
|
|
|

|
|
|
|
The calibration procedure is the following:
|
|
<ul>
|
|
<li>
|
|
a static calibration pattern is used to estimate the transformation between the target frame
|
|
and the camera frame
|
|
</li>
|
|
<li>
|
|
the robot gripper is moved in order to acquire several poses
|
|
</li>
|
|
<li>
|
|
for each pose, the homogeneous transformation between the gripper frame and the robot base frame is recorded using for
|
|
instance the robot kinematics
|
|
\(
|
|
\begin{bmatrix}
|
|
X_b\\
|
|
Y_b\\
|
|
Z_b\\
|
|
1
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
_{}^{b}\textrm{R}_g & _{}^{b}\textrm{t}_g \\
|
|
0_{1 \times 3} & 1
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X_g\\
|
|
Y_g\\
|
|
Z_g\\
|
|
1
|
|
\end{bmatrix}
|
|
\)
|
|
</li>
|
|
<li>
|
|
for each pose, the homogeneous transformation between the calibration target frame and the camera frame is recorded using
|
|
for instance a pose estimation method (PnP) from 2D-3D point correspondences
|
|
\(
|
|
\begin{bmatrix}
|
|
X_c\\
|
|
Y_c\\
|
|
Z_c\\
|
|
1
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
_{}^{c}\textrm{R}_t & _{}^{c}\textrm{t}_t \\
|
|
0_{1 \times 3} & 1
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X_t\\
|
|
Y_t\\
|
|
Z_t\\
|
|
1
|
|
\end{bmatrix}
|
|
\)
|
|
</li>
|
|
</ul>
|
|
|
|
The Hand-Eye calibration procedure returns the following homogeneous transformation
|
|
\(
|
|
\begin{bmatrix}
|
|
X_g\\
|
|
Y_g\\
|
|
Z_g\\
|
|
1
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
_{}^{g}\textrm{R}_c & _{}^{g}\textrm{t}_c \\
|
|
0_{1 \times 3} & 1
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X_c\\
|
|
Y_c\\
|
|
Z_c\\
|
|
1
|
|
\end{bmatrix}
|
|
\)
|
|
|
|
This problem is also known as solving the \(\mathbf{A}\mathbf{X}=\mathbf{X}\mathbf{B}\) equation:
|
|
<ul>
<li>
for an eye-in-hand configuration
\(
\begin{align*}
^{b}{\textrm{T}_g}^{(1)} \hspace{0.2em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(1)} &=
\hspace{0.1em} ^{b}{\textrm{T}_g}^{(2)} \hspace{0.2em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} \\
(^{b}{\textrm{T}_g}^{(2)})^{-1} \hspace{0.2em} ^{b}{\textrm{T}_g}^{(1)} \hspace{0.2em} ^{g}\textrm{T}_c &=
\hspace{0.1em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} (^{c}{\textrm{T}_t}^{(1)})^{-1} \\
\textrm{A}_i \textrm{X} &= \textrm{X} \textrm{B}_i
\end{align*}
\)
</li>
</ul>
|
|
|
|
<ul>
<li>
for an eye-to-hand configuration
\(
\begin{align*}
^{g}{\textrm{T}_b}^{(1)} \hspace{0.2em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(1)} &=
\hspace{0.1em} ^{g}{\textrm{T}_b}^{(2)} \hspace{0.2em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} \\
(^{g}{\textrm{T}_b}^{(2)})^{-1} \hspace{0.2em} ^{g}{\textrm{T}_b}^{(1)} \hspace{0.2em} ^{b}\textrm{T}_c &=
\hspace{0.1em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} (^{c}{\textrm{T}_t}^{(1)})^{-1} \\
\textrm{A}_i \textrm{X} &= \textrm{X} \textrm{B}_i
\end{align*}
\)
</li>
</ul>
|
|
|
|
<b>Note:</b>
Additional information can be found on this <a href="http://campar.in.tum.de/Chair/HandEyeCalibration">website</a>.
|
|
<b>Note:</b>
A minimum of 2 motions with non-parallel rotation axes is necessary to determine the hand-eye transformation,
so at least 3 different poses are required, but it is strongly recommended to use many more poses.</dd>
|
|
</dl>
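The \(\textrm{A}_i \textrm{X} = \textrm{X} \textrm{B}_i\) relation above can be checked numerically with plain 4x4 homogeneous transforms, independently of the OpenCV solver. The sketch below (pure Java, all pose values hypothetical) picks a hand-eye transform X and a static base-to-target pose, derives the camera-to-target poses they imply for two gripper poses, and verifies that the gripper motion A and the camera motion B satisfy A X = X B:

```java
// Numerical check of the eye-in-hand AX = XB identity using 4x4
// homogeneous transforms. Pure Java, no OpenCV dependency; all pose
// values below are hypothetical.
public class HandEyeAXXB {

    // 4x4 matrix product
    static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++) c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Inverse of a rigid transform [R t; 0 1] is [R^T, -R^T t; 0 1]
    static double[][] inv(double[][] T) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) r[i][j] = T[j][i];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) r[i][3] -= T[j][i] * T[j][3];
        r[3][3] = 1;
        return r;
    }

    // Rotation about z by angle a (radians) plus translation (tx, ty, tz)
    static double[][] rz(double a, double tx, double ty, double tz) {
        return new double[][] {
            {Math.cos(a), -Math.sin(a), 0, tx},
            {Math.sin(a),  Math.cos(a), 0, ty},
            {0, 0, 1, tz},
            {0, 0, 0, 1}
        };
    }

    // Largest entry-wise difference between A*X and X*B
    public static double maxResidual() {
        double[][] X   = rz(0.5, 0.10, 0.00, 0.05); // gripper->camera (the unknown)
        double[][] Tbt = rz(0.2, 1.00, 0.50, 0.00); // static base->target pose
        double[][] Tg1 = rz(0.3, 0.40, 0.10, 0.30); // gripper->base, pose 1
        double[][] Tg2 = rz(0.9, 0.20, 0.60, 0.25); // gripper->base, pose 2
        // camera->target poses implied by the setup: Tc_i = X^-1 Tg_i^-1 Tbt
        double[][] Tc1 = mul(inv(X), mul(inv(Tg1), Tbt));
        double[][] Tc2 = mul(inv(X), mul(inv(Tg2), Tbt));
        double[][] A = mul(inv(Tg2), Tg1);          // gripper motion between poses
        double[][] B = mul(Tc2, inv(Tc1));          // camera motion between poses
        double[][] AX = mul(A, X), XB = mul(X, B);
        double m = 0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                m = Math.max(m, Math.abs(AX[i][j] - XB[i][j]));
        return m;
    }

    public static void main(String[] args) {
        System.out.println(maxResidual() < 1e-12);
    }
}
```

calibrateHandEye solves the same relation in the opposite direction: given many (A_i, B_i) motion pairs built from the measured gripper and target poses, it recovers the unknown X.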
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateHandEye(java.util.List,java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>calibrateHandEye</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">calibrateHandEye</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_gripper2base,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_gripper2base,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_target2cam,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_target2cam,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_cam2gripper,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_cam2gripper)</span></div>
|
|
<div class="block">Computes Hand-Eye calibration: \(_{}^{g}\textrm{T}_c\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>R_gripper2base</code> - Rotation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the gripper frame to the robot base frame (\(_{}^{b}\textrm{T}_g\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the rotation, <code>(3x3)</code> rotation matrices or <code>(3x1)</code> rotation vectors,
|
|
for all the transformations from gripper frame to robot base frame.</dd>
|
|
<dd><code>t_gripper2base</code> - Translation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the gripper frame to the robot base frame (\(_{}^{b}\textrm{T}_g\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the <code>(3x1)</code> translation vectors for all the transformations
|
|
from gripper frame to robot base frame.</dd>
|
|
<dd><code>R_target2cam</code> - Rotation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the target frame to the camera frame (\(_{}^{c}\textrm{T}_t\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the rotation, <code>(3x3)</code> rotation matrices or <code>(3x1)</code> rotation vectors,
|
|
for all the transformations from calibration target frame to camera frame.</dd>
|
|
<dd><code>t_target2cam</code> - Translation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the target frame to the camera frame (\(_{}^{c}\textrm{T}_t\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the <code>(3x1)</code> translation vectors for all the transformations
|
|
from calibration target frame to camera frame.</dd>
|
|
<dd><code>R_cam2gripper</code> - Estimated <code>(3x3)</code> rotation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the camera frame to the gripper frame (\(_{}^{g}\textrm{T}_c\)).</dd>
|
|
<dd><code>t_cam2gripper</code> - Estimated <code>(3x1)</code> translation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the camera frame to the gripper frame (\(_{}^{g}\textrm{T}_c\)).
|
|
|
|
The function performs the Hand-Eye calibration using various methods. One approach consists of estimating the
rotation and then the translation (separable solutions); the following methods are implemented:
|
|
<ul>
|
|
<li>
|
|
R. Tsai, R. Lenz A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/EyeCalibration \cite Tsai89
|
|
</li>
|
|
<li>
|
|
F. Park, B. Martin Robot Sensor Calibration: Solving AX = XB on the Euclidean Group \cite Park94
|
|
</li>
|
|
<li>
|
|
R. Horaud, F. Dornaika Hand-Eye Calibration \cite Horaud95
|
|
</li>
|
|
</ul>
|
|
|
|
Another approach consists of estimating the rotation and the translation simultaneously (simultaneous solutions),
with the following implemented methods:
|
|
<ul>
|
|
<li>
|
|
N. Andreff, R. Horaud, B. Espiau On-line Hand-Eye Calibration \cite Andreff99
|
|
</li>
|
|
<li>
|
|
K. Daniilidis Hand-Eye Calibration Using Dual Quaternions \cite Daniilidis98
|
|
</li>
|
|
</ul>
|
|
|
|
The following picture describes the Hand-Eye calibration problem, in which the transformation from a camera ("eye")
mounted on a robot gripper ("hand") to the gripper frame has to be estimated. This configuration is called eye-in-hand.
|
|
|
|
The eye-to-hand configuration consists of a static camera observing a calibration pattern mounted on the robot
|
|
end-effector. The transformation from the camera to the robot base frame can then be estimated by inputting
|
|
the suitable transformations to the function, see below.
|
|
|
|

|
|
|
|
The calibration procedure is the following:
|
|
<ul>
|
|
<li>
|
|
a static calibration pattern is used to estimate the transformation between the target frame
|
|
and the camera frame
|
|
</li>
|
|
<li>
|
|
the robot gripper is moved in order to acquire several poses
|
|
</li>
|
|
<li>
|
|
for each pose, the homogeneous transformation between the gripper frame and the robot base frame is recorded using for
|
|
instance the robot kinematics
|
|
\(
|
|
\begin{bmatrix}
|
|
X_b\\
|
|
Y_b\\
|
|
Z_b\\
|
|
1
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
_{}^{b}\textrm{R}_g & _{}^{b}\textrm{t}_g \\
|
|
0_{1 \times 3} & 1
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X_g\\
|
|
Y_g\\
|
|
Z_g\\
|
|
1
|
|
\end{bmatrix}
|
|
\)
|
|
</li>
|
|
<li>
|
|
for each pose, the homogeneous transformation between the calibration target frame and the camera frame is recorded using
|
|
for instance a pose estimation method (PnP) from 2D-3D point correspondences
|
|
\(
|
|
\begin{bmatrix}
|
|
X_c\\
|
|
Y_c\\
|
|
Z_c\\
|
|
1
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
_{}^{c}\textrm{R}_t & _{}^{c}\textrm{t}_t \\
|
|
0_{1 \times 3} & 1
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X_t\\
|
|
Y_t\\
|
|
Z_t\\
|
|
1
|
|
\end{bmatrix}
|
|
\)
|
|
</li>
|
|
</ul>
|
|
|
|
The Hand-Eye calibration procedure returns the following homogeneous transformation
|
|
\(
|
|
\begin{bmatrix}
|
|
X_g\\
|
|
Y_g\\
|
|
Z_g\\
|
|
1
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
_{}^{g}\textrm{R}_c & _{}^{g}\textrm{t}_c \\
|
|
0_{1 \times 3} & 1
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X_c\\
|
|
Y_c\\
|
|
Z_c\\
|
|
1
|
|
\end{bmatrix}
|
|
\)
|
|
|
|
This problem is also known as solving the \(\mathbf{A}\mathbf{X}=\mathbf{X}\mathbf{B}\) equation:
|
|
<ul>
<li>
for an eye-in-hand configuration
\(
\begin{align*}
^{b}{\textrm{T}_g}^{(1)} \hspace{0.2em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(1)} &=
\hspace{0.1em} ^{b}{\textrm{T}_g}^{(2)} \hspace{0.2em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} \\
(^{b}{\textrm{T}_g}^{(2)})^{-1} \hspace{0.2em} ^{b}{\textrm{T}_g}^{(1)} \hspace{0.2em} ^{g}\textrm{T}_c &=
\hspace{0.1em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} (^{c}{\textrm{T}_t}^{(1)})^{-1} \\
\textrm{A}_i \textrm{X} &= \textrm{X} \textrm{B}_i
\end{align*}
\)
</li>
</ul>
|
|
|
|
<ul>
<li>
for an eye-to-hand configuration
\(
\begin{align*}
^{g}{\textrm{T}_b}^{(1)} \hspace{0.2em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(1)} &=
\hspace{0.1em} ^{g}{\textrm{T}_b}^{(2)} \hspace{0.2em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} \\
(^{g}{\textrm{T}_b}^{(2)})^{-1} \hspace{0.2em} ^{g}{\textrm{T}_b}^{(1)} \hspace{0.2em} ^{b}\textrm{T}_c &=
\hspace{0.1em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} (^{c}{\textrm{T}_t}^{(1)})^{-1} \\
\textrm{A}_i \textrm{X} &= \textrm{X} \textrm{B}_i
\end{align*}
\)
</li>
</ul>
|
|
|
|
<b>Note:</b>
Additional information can be found on this <a href="http://campar.in.tum.de/Chair/HandEyeCalibration">website</a>.
|
|
<b>Note:</b>
A minimum of 2 motions with non-parallel rotation axes is necessary to determine the hand-eye transformation,
so at least 3 different poses are required, but it is strongly recommended to use many more poses.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateRobotWorldHandEye(java.util.List,java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>calibrateRobotWorldHandEye</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">calibrateRobotWorldHandEye</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_world2cam,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_world2cam,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_base2gripper,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_base2gripper,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_base2world,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_base2world,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_gripper2cam,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_gripper2cam,
|
|
int method)</span></div>
|
|
<div class="block">Computes Robot-World/Hand-Eye calibration: \(_{}^{w}\textrm{T}_b\) and \(_{}^{c}\textrm{T}_g\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>R_world2cam</code> - Rotation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the world frame to the camera frame (\(_{}^{c}\textrm{T}_w\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the rotation, <code>(3x3)</code> rotation matrices or <code>(3x1)</code> rotation vectors,
|
|
for all the transformations from world frame to the camera frame.</dd>
|
|
<dd><code>t_world2cam</code> - Translation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the world frame to the camera frame (\(_{}^{c}\textrm{T}_w\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the <code>(3x1)</code> translation vectors for all the transformations
|
|
from world frame to the camera frame.</dd>
|
|
<dd><code>R_base2gripper</code> - Rotation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the robot base frame to the gripper frame (\(_{}^{g}\textrm{T}_b\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the rotation, <code>(3x3)</code> rotation matrices or <code>(3x1)</code> rotation vectors,
|
|
for all the transformations from robot base frame to the gripper frame.</dd>
|
|
<dd><code>t_base2gripper</code> - Translation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the robot base frame to the gripper frame (\(_{}^{g}\textrm{T}_b\)).
|
|
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the <code>(3x1)</code> translation vectors for all the transformations
|
|
from robot base frame to the gripper frame.</dd>
|
|
<dd><code>R_base2world</code> - Estimated <code>(3x3)</code> rotation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the robot base frame to the world frame (\(_{}^{w}\textrm{T}_b\)).</dd>
|
|
<dd><code>t_base2world</code> - Estimated <code>(3x1)</code> translation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the robot base frame to the world frame (\(_{}^{w}\textrm{T}_b\)).</dd>
|
|
<dd><code>R_gripper2cam</code> - Estimated <code>(3x3)</code> rotation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the gripper frame to the camera frame (\(_{}^{c}\textrm{T}_g\)).</dd>
|
|
<dd><code>t_gripper2cam</code> - Estimated <code>(3x1)</code> translation part extracted from the homogeneous matrix that transforms a point
|
|
expressed in the gripper frame to the camera frame (\(_{}^{c}\textrm{T}_g\)).</dd>
|
|
<dd><code>method</code> - One of the implemented Robot-World/Hand-Eye calibration methods, see cv::RobotWorldHandEyeCalibrationMethod
|
|
|
|
The function performs the Robot-World/Hand-Eye calibration using various methods. One approach consists of estimating the
rotation and then the translation (separable solutions):
|
|
<ul>
|
|
<li>
|
|
M. Shah, Solving the robot-world/hand-eye calibration problem using the kronecker product \cite Shah2013SolvingTR
|
|
</li>
|
|
</ul>
|
|
|
|
Another approach consists of estimating the rotation and the translation simultaneously (simultaneous solutions),
with the following implemented method:
|
|
<ul>
|
|
<li>
|
|
A. Li, L. Wang, and D. Wu, Simultaneous robot-world and hand-eye calibration using dual-quaternions and kronecker product \cite Li2010SimultaneousRA
|
|
</li>
|
|
</ul>
|
|
|
|
The following picture describes the Robot-World/Hand-Eye calibration problem where the transformations between a robot and a world frame
|
|
and between a robot gripper ("hand") and a camera ("eye") mounted at the robot end-effector have to be estimated.
|
|
|
|

|
|
|
|
The calibration procedure is the following:
|
|
<ul>
|
|
<li>
|
|
a static calibration pattern is used to estimate the transformation between the target frame
|
|
and the camera frame
|
|
</li>
|
|
<li>
|
|
the robot gripper is moved in order to acquire several poses
|
|
</li>
|
|
<li>
|
|
for each pose, the homogeneous transformation between the gripper frame and the robot base frame is recorded using for
|
|
instance the robot kinematics
|
|
\(
|
|
\begin{bmatrix}
|
|
X_g\\
|
|
Y_g\\
|
|
Z_g\\
|
|
1
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
_{}^{g}\textrm{R}_b & _{}^{g}\textrm{t}_b \\
|
|
0_{1 \times 3} & 1
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X_b\\
|
|
Y_b\\
|
|
Z_b\\
|
|
1
|
|
\end{bmatrix}
|
|
\)
|
|
</li>
|
|
<li>
|
|
for each pose, the homogeneous transformation between the calibration target frame (the world frame) and the camera frame is recorded using
|
|
for instance a pose estimation method (PnP) from 2D-3D point correspondences
|
|
\(
|
|
\begin{bmatrix}
|
|
X_c\\
|
|
Y_c\\
|
|
Z_c\\
|
|
1
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
_{}^{c}\textrm{R}_w & _{}^{c}\textrm{t}_w \\
|
|
0_{1 \times 3} & 1
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X_w\\
|
|
Y_w\\
|
|
Z_w\\
|
|
1
|
|
\end{bmatrix}
|
|
\)
|
|
</li>
|
|
</ul>
|
|
|
|
The Robot-World/Hand-Eye calibration procedure returns the following homogeneous transformations
|
|
\(
|
|
\begin{bmatrix}
|
|
X_w\\
|
|
Y_w\\
|
|
Z_w\\
|
|
1
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
_{}^{w}\textrm{R}_b & _{}^{w}\textrm{t}_b \\
|
|
0_{1 \times 3} & 1
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X_b\\
|
|
Y_b\\
|
|
Z_b\\
|
|
1
|
|
\end{bmatrix}
|
|
\)
|
|
\(
|
|
\begin{bmatrix}
|
|
X_c\\
|
|
Y_c\\
|
|
Z_c\\
|
|
1
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
_{}^{c}\textrm{R}_g & _{}^{c}\textrm{t}_g \\
|
|
0_{1 \times 3} & 1
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X_g\\
|
|
Y_g\\
|
|
Z_g\\
|
|
1
|
|
\end{bmatrix}
|
|
\)
|
|
|
|
This problem is also known as solving the \(\mathbf{A}\mathbf{X}=\mathbf{Z}\mathbf{B}\) equation, with:
|
|
<ul>
|
|
<li>
|
|
\(\mathbf{A} \Leftrightarrow \hspace{0.1em} _{}^{c}\textrm{T}_w\)
|
|
</li>
|
|
<li>
|
|
\(\mathbf{X} \Leftrightarrow \hspace{0.1em} _{}^{w}\textrm{T}_b\)
|
|
</li>
|
|
<li>
|
|
\(\mathbf{Z} \Leftrightarrow \hspace{0.1em} _{}^{c}\textrm{T}_g\)
|
|
</li>
|
|
<li>
|
|
\(\mathbf{B} \Leftrightarrow \hspace{0.1em} _{}^{g}\textrm{T}_b\)
|
|
</li>
|
|
</ul>
|
|
|
|
<b>Note:</b>
At least 3 measurements are required (the input vector sizes must be greater than or equal to 3).</dd>
|
|
</dl>
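The \(\mathbf{A}\mathbf{X}=\mathbf{Z}\mathbf{B}\) relation above can likewise be checked numerically with plain 4x4 homogeneous transforms, independently of the OpenCV solver. The sketch below (pure Java, all pose values hypothetical) picks the two unknowns X (base-to-world) and Z (gripper-to-camera), derives the world-to-camera pose A consistent with a measured base-to-gripper pose B, and verifies that A X = Z B:

```java
// Numerical check of the Robot-World/Hand-Eye AX = ZB identity using
// 4x4 homogeneous transforms. Pure Java, no OpenCV dependency; all
// pose values below are hypothetical.
public class RobotWorldAXZB {

    // 4x4 matrix product
    static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++) c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Inverse of a rigid transform [R t; 0 1] is [R^T, -R^T t; 0 1]
    static double[][] inv(double[][] T) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) r[i][j] = T[j][i];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) r[i][3] -= T[j][i] * T[j][3];
        r[3][3] = 1;
        return r;
    }

    // Rotation about z by angle a (radians) plus translation (tx, ty, tz)
    static double[][] rz(double a, double tx, double ty, double tz) {
        return new double[][] {
            {Math.cos(a), -Math.sin(a), 0, tx},
            {Math.sin(a),  Math.cos(a), 0, ty},
            {0, 0, 1, tz},
            {0, 0, 0, 1}
        };
    }

    // Largest entry-wise difference between A*X and Z*B
    public static double maxResidual() {
        double[][] X = rz(0.4, 0.20, -0.10, 0.00);  // base->world (unknown #1)
        double[][] Z = rz(-0.3, 0.05, 0.02, 0.10);  // gripper->camera (unknown #2)
        double[][] B = rz(0.7, 0.30, 0.40, 0.20);   // base->gripper (measured)
        // world->camera pose consistent with the setup: A = Z B X^-1,
        // since both A X and Z B equal the base->camera transform
        double[][] A = mul(Z, mul(B, inv(X)));
        double[][] AX = mul(A, X), ZB = mul(Z, B);
        double m = 0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                m = Math.max(m, Math.abs(AX[i][j] - ZB[i][j]));
        return m;
    }

    public static void main(String[] args) {
        System.out.println(maxResidual() < 1e-12);
    }
}
```

calibrateRobotWorldHandEye works in the opposite direction: from many measured (A_i, B_i) pose pairs, it recovers both unknowns X and Z simultaneously.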
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="calibrateRobotWorldHandEye(java.util.List,java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>calibrateRobotWorldHandEye</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">calibrateRobotWorldHandEye</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_world2cam,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_world2cam,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> R_base2gripper,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> t_base2gripper,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_base2world,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_base2world,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R_gripper2cam,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t_gripper2cam)</span></div>
|
|
<div class="block">Computes Robot-World/Hand-Eye calibration: \(_{}^{w}\textrm{T}_b\) and \(_{}^{c}\textrm{T}_g\)</div>
<dl class="notes">
<dt>Parameters:</dt>
<dd><code>R_world2cam</code> - Rotation part extracted from the homogeneous matrix that transforms a point
expressed in the world frame to the camera frame (\(_{}^{c}\textrm{T}_w\)).
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the rotation, <code>(3x3)</code> rotation matrices or <code>(3x1)</code> rotation vectors,
for all the transformations from world frame to the camera frame.</dd>
<dd><code>t_world2cam</code> - Translation part extracted from the homogeneous matrix that transforms a point
expressed in the world frame to the camera frame (\(_{}^{c}\textrm{T}_w\)).
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the <code>(3x1)</code> translation vectors for all the transformations
from world frame to the camera frame.</dd>
<dd><code>R_base2gripper</code> - Rotation part extracted from the homogeneous matrix that transforms a point
expressed in the robot base frame to the gripper frame (\(_{}^{g}\textrm{T}_b\)).
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the rotation, <code>(3x3)</code> rotation matrices or <code>(3x1)</code> rotation vectors,
for all the transformations from robot base frame to the gripper frame.</dd>
<dd><code>t_base2gripper</code> - Translation part extracted from the homogeneous matrix that transforms a point
expressed in the robot base frame to the gripper frame (\(_{}^{g}\textrm{T}_b\)).
This is a vector (<code>vector&lt;Mat&gt;</code>) that contains the <code>(3x1)</code> translation vectors for all the transformations
from robot base frame to the gripper frame.</dd>
<dd><code>R_base2world</code> - Estimated <code>(3x3)</code> rotation part extracted from the homogeneous matrix that transforms a point
expressed in the robot base frame to the world frame (\(_{}^{w}\textrm{T}_b\)).</dd>
<dd><code>t_base2world</code> - Estimated <code>(3x1)</code> translation part extracted from the homogeneous matrix that transforms a point
expressed in the robot base frame to the world frame (\(_{}^{w}\textrm{T}_b\)).</dd>
<dd><code>R_gripper2cam</code> - Estimated <code>(3x3)</code> rotation part extracted from the homogeneous matrix that transforms a point
expressed in the gripper frame to the camera frame (\(_{}^{c}\textrm{T}_g\)).</dd>
<dd><code>t_gripper2cam</code> - Estimated <code>(3x1)</code> translation part extracted from the homogeneous matrix that transforms a point
expressed in the gripper frame to the camera frame (\(_{}^{c}\textrm{T}_g\)).
The function performs the Robot-World/Hand-Eye calibration using various methods. One approach consists in estimating the
rotation then the translation (separable solutions):
<ul>
<li>
M. Shah, Solving the robot-world/hand-eye calibration problem using the Kronecker product CITE: Shah2013SolvingTR
</li>
</ul>
Another approach consists in estimating simultaneously the rotation and the translation (simultaneous solutions),
with the following implemented method:
<ul>
<li>
A. Li, L. Wang, and D. Wu, Simultaneous robot-world and hand-eye calibration using dual-quaternions and Kronecker product CITE: Li2010SimultaneousRA
</li>
</ul>
The following picture describes the Robot-World/Hand-Eye calibration problem where the transformations between a robot and a world frame
and between a robot gripper ("hand") and a camera ("eye") mounted at the robot end-effector have to be estimated.

![](pics/robot-world_hand-eye_figure.png)
The calibration procedure is the following:
<ul>
<li>
a static calibration pattern is used to estimate the transformation between the target frame
and the camera frame
</li>
<li>
the robot gripper is moved in order to acquire several poses
</li>
<li>
for each pose, the homogeneous transformation between the gripper frame and the robot base frame is recorded using for
instance the robot kinematics
\(
\begin{bmatrix}
X_g \\ Y_g \\ Z_g \\ 1
\end{bmatrix}
=
\begin{bmatrix}
_{}^{g}\textrm{R}_b & _{}^{g}\textrm{t}_b \\
0_{1 \times 3} & 1
\end{bmatrix}
\begin{bmatrix}
X_b \\ Y_b \\ Z_b \\ 1
\end{bmatrix}
\)
</li>
<li>
for each pose, the homogeneous transformation between the calibration target frame (the world frame) and the camera frame is recorded using
for instance a pose estimation method (PnP) from 2D-3D point correspondences
\(
\begin{bmatrix}
X_c \\ Y_c \\ Z_c \\ 1
\end{bmatrix}
=
\begin{bmatrix}
_{}^{c}\textrm{R}_w & _{}^{c}\textrm{t}_w \\
0_{1 \times 3} & 1
\end{bmatrix}
\begin{bmatrix}
X_w \\ Y_w \\ Z_w \\ 1
\end{bmatrix}
\)
</li>
</ul>
The Robot-World/Hand-Eye calibration procedure returns the following homogeneous transformations
\(
\begin{bmatrix}
X_w \\ Y_w \\ Z_w \\ 1
\end{bmatrix}
=
\begin{bmatrix}
_{}^{w}\textrm{R}_b & _{}^{w}\textrm{t}_b \\
0_{1 \times 3} & 1
\end{bmatrix}
\begin{bmatrix}
X_b \\ Y_b \\ Z_b \\ 1
\end{bmatrix}
\)
\(
\begin{bmatrix}
X_c \\ Y_c \\ Z_c \\ 1
\end{bmatrix}
=
\begin{bmatrix}
_{}^{c}\textrm{R}_g & _{}^{c}\textrm{t}_g \\
0_{1 \times 3} & 1
\end{bmatrix}
\begin{bmatrix}
X_g \\ Y_g \\ Z_g \\ 1
\end{bmatrix}
\)
This problem is also known as solving the \(\mathbf{A}\mathbf{X}=\mathbf{Z}\mathbf{B}\) equation, with:
<ul>
<li>
\(\mathbf{A} \Leftrightarrow \hspace{0.1em} _{}^{c}\textrm{T}_w\)
</li>
<li>
\(\mathbf{X} \Leftrightarrow \hspace{0.1em} _{}^{w}\textrm{T}_b\)
</li>
<li>
\(\mathbf{Z} \Leftrightarrow \hspace{0.1em} _{}^{c}\textrm{T}_g\)
</li>
<li>
\(\mathbf{B} \Leftrightarrow \hspace{0.1em} _{}^{g}\textrm{T}_b\)
</li>
</ul>
<b>Note:</b> at least 3 measurements are required (the input vector sizes must be greater than or equal to 3).</dd>
</dl>
</section>
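As a sanity check on recorded pose data, the \(\mathbf{A}\mathbf{X}=\mathbf{Z}\mathbf{B}\) identity can be evaluated with plain 4x4 homogeneous matrices. This is an illustrative sketch with toy translation-only poses (the class and helper names are hypothetical, not part of the OpenCV API):

```java
// Checks the A*X = Z*B identity solved by robot-world/hand-eye calibration,
// with A = cTw, X = wTb, Z = cTg, B = gTb as 4x4 homogeneous transforms.
public class AxZbSketch {
    // Plain 4x4 homogeneous matrix product.
    public static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Largest entry-wise difference; near zero means the poses are consistent.
    public static double maxAbsDiff(double[][] a, double[][] b) {
        double m = 0.0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                m = Math.max(m, Math.abs(a[i][j] - b[i][j]));
        return m;
    }

    // Translation-only homogeneous transform (identity rotation).
    public static double[][] t(double x, double y, double z) {
        return new double[][]{{1,0,0,x},{0,1,0,y},{0,0,1,z},{0,0,0,1}};
    }

    public static void main(String[] args) {
        // Toy consistent data: translations chosen so that A*X equals Z*B.
        double[][] A = t(1, 0, 0), X = t(0, 2, 0), Z = t(0, 0, 3), B = t(1, 2, -3);
        System.out.println(maxAbsDiff(mul(A, X), mul(Z, B))); // prints 0.0
    }
}
```

With real calibration data, a large residual here usually indicates mismatched frame conventions (e.g. base-to-gripper vs. gripper-to-base) in the recorded inputs.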
</li>
<li>
<section class="detail" id="convertPointsToHomogeneous(org.opencv.core.Mat,org.opencv.core.Mat)">
<h3>convertPointsToHomogeneous</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">convertPointsToHomogeneous</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst)</span></div>
<div class="block">Converts points from Euclidean to homogeneous space.</div>
<dl class="notes">
<dt>Parameters:</dt>
<dd><code>src</code> - Input vector of N-dimensional points.</dd>
<dd><code>dst</code> - Output vector of N+1-dimensional points.

The function converts points from Euclidean to homogeneous space by appending 1's to the tuple of
point coordinates. That is, each point (x1, x2, ..., xn) is converted to (x1, x2, ..., xn, 1).</dd>
</dl>
</section>
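The append-a-one rule can be sketched without OpenCV; <code>toHomogeneous</code> below is a hypothetical helper mirroring the element-wise behaviour described above:

```java
// Euclidean -> homogeneous: append a 1 to each point, mirroring what
// Calib3d.convertPointsToHomogeneous does per row of the input Mat.
public class ToHomogeneousSketch {
    public static double[] toHomogeneous(double[] p) {
        double[] h = new double[p.length + 1];
        System.arraycopy(p, 0, h, 0, p.length);
        h[p.length] = 1.0; // appended homogeneous coordinate
        return h;
    }

    public static void main(String[] args) {
        // (2, 3) -> (2, 3, 1)
        double[] h = toHomogeneous(new double[]{2.0, 3.0});
        System.out.println(java.util.Arrays.toString(h)); // prints [2.0, 3.0, 1.0]
    }
}
```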
</li>
<li>
<section class="detail" id="convertPointsFromHomogeneous(org.opencv.core.Mat,org.opencv.core.Mat)">
<h3>convertPointsFromHomogeneous</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">convertPointsFromHomogeneous</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst)</span></div>
<div class="block">Converts points from homogeneous to Euclidean space.</div>
<dl class="notes">
<dt>Parameters:</dt>
<dd><code>src</code> - Input vector of N-dimensional points.</dd>
<dd><code>dst</code> - Output vector of N-1-dimensional points.

The function converts points from homogeneous to Euclidean space using perspective projection. That is,
each point (x1, x2, ..., x(n-1), xn) is converted to (x1/xn, x2/xn, ..., x(n-1)/xn). When xn=0, the
output point coordinates will be (0,0,0,...).</dd>
</dl>
</section>
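The perspective division, including the documented xn=0 behaviour, can be sketched the same way (<code>fromHomogeneous</code> is a hypothetical helper, not an OpenCV function):

```java
// Homogeneous -> Euclidean: divide by the last coordinate, mirroring
// Calib3d.convertPointsFromHomogeneous, including the xn == 0 case.
public class FromHomogeneousSketch {
    public static double[] fromHomogeneous(double[] h) {
        int n = h.length - 1;
        double[] p = new double[n];
        double w = h[n];
        if (w != 0.0) {
            for (int i = 0; i < n; i++) p[i] = h[i] / w; // perspective division
        } // w == 0: leave the output as zeros, matching the documented behaviour
        return p;
    }

    public static void main(String[] args) {
        // (4, 6, 2) -> (2, 3)
        System.out.println(java.util.Arrays.toString(fromHomogeneous(new double[]{4.0, 6.0, 2.0})));
    }
}
```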
</li>
<li>
<section class="detail" id="findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,double,int,org.opencv.core.Mat)">
<h3>findFundamentalMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findFundamentalMat</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
int method,
double ransacReprojThreshold,
double confidence,
int maxIters,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</span></div>
<div class="block">Calculates a fundamental matrix from the corresponding points in two images.</div>
<dl class="notes">
<dt>Parameters:</dt>
<dd><code>points1</code> - Array of N points from the first image. The point coordinates should be
floating-point (single or double precision).</dd>
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
<dd><code>method</code> - Method for computing a fundamental matrix.
<ul>
<li>
REF: FM_7POINT for a 7-point algorithm. \(N = 7\)
</li>
<li>
REF: FM_8POINT for an 8-point algorithm. \(N \ge 8\)
</li>
<li>
REF: FM_RANSAC for the RANSAC algorithm. \(N \ge 8\)
</li>
<li>
REF: FM_LMEDS for the LMedS algorithm. \(N \ge 8\)
</li>
</ul></dd>
<dd><code>ransacReprojThreshold</code> - Parameter used only for RANSAC. It is the maximum distance from a point to an epipolar
line in pixels, beyond which the point is considered an outlier and is not used for computing the
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
point localization, image resolution, and the image noise.</dd>
<dd><code>confidence</code> - Parameter used for the RANSAC and LMedS methods only. It specifies a desirable level
of confidence (probability) that the estimated matrix is correct.</dd>
<dd><code>mask</code> - Optional output mask.</dd>
<dd><code>maxIters</code> - The maximum number of robust method iterations.
The epipolar geometry is described by the following equation:

\([p_2; 1]^T F [p_1; 1] = 0\)

where \(F\) is a fundamental matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
second images, respectively.

The function calculates the fundamental matrix using one of four methods listed above and returns
the found fundamental matrix. Normally just one matrix is found. But in case of the 7-point
algorithm, the function may return up to 3 solutions ( \(9 \times 3\) matrix that stores all 3
matrices sequentially).

The calculated fundamental matrix may be passed further to #computeCorrespondEpilines that finds the
epipolar lines corresponding to the specified points. It can also be passed to
#stereoRectifyUncalibrated to compute the rectification transformation. :
<code>
// Example. Estimation of fundamental matrix using the RANSAC algorithm
int point_count = 100;
vector&lt;Point2f&gt; points1(point_count);
vector&lt;Point2f&gt; points2(point_count);

// initialize the points here ...
for( int i = 0; i &lt; point_count; i++ )
{
    points1[i] = ...;
    points2[i] = ...;
}

Mat fundamental_matrix =
    findFundamentalMat(points1, points2, FM_RANSAC, 3, 0.99);
</code></dd>
<dt>Returns:</dt>
<dd>automatically generated</dd>
</dl>
</section>
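Once a fundamental matrix has been estimated, a correspondence can be checked against the epipolar constraint \([p_2; 1]^T F [p_1; 1] = 0\). A plain-Java sketch with a toy \(F\) for a rectified stereo pair (purely illustrative; the class name is hypothetical):

```java
// Evaluates the epipolar residual r = [p2;1]^T F [p1;1].
// |r| close to 0 means the point pair is consistent with F.
public class EpipolarResidualSketch {
    public static double residual(double[][] F, double[] p1, double[] p2) {
        double[] x1 = {p1[0], p1[1], 1.0}; // homogeneous point in image 1
        double[] x2 = {p2[0], p2[1], 1.0}; // homogeneous point in image 2
        double r = 0.0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r += x2[i] * F[i][j] * x1[j];
        return r;
    }

    public static void main(String[] args) {
        // Toy F for a rectified pair (horizontal translation): constraint is y2 == y1.
        double[][] F = {{0, 0, 0}, {0, 0, -1}, {0, 1, 0}};
        System.out.println(residual(F, new double[]{10, 5}, new double[]{7, 5})); // prints 0.0
    }
}
```

In practice the threshold applied to such residuals is what <code>ransacReprojThreshold</code> controls (after converting to a point-to-epipolar-line distance).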
</li>
<li>
<section class="detail" id="findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,double,int)">
<h3>findFundamentalMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findFundamentalMat</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
int method,
double ransacReprojThreshold,
double confidence,
int maxIters)</span></div>
<div class="block">Calculates a fundamental matrix from the corresponding points in two images.</div>
<dl class="notes">
<dt>Parameters:</dt>
<dd><code>points1</code> - Array of N points from the first image. The point coordinates should be
floating-point (single or double precision).</dd>
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
<dd><code>method</code> - Method for computing a fundamental matrix.
<ul>
<li>
REF: FM_7POINT for a 7-point algorithm. \(N = 7\)
</li>
<li>
REF: FM_8POINT for an 8-point algorithm. \(N \ge 8\)
</li>
<li>
REF: FM_RANSAC for the RANSAC algorithm. \(N \ge 8\)
</li>
<li>
REF: FM_LMEDS for the LMedS algorithm. \(N \ge 8\)
</li>
</ul></dd>
<dd><code>ransacReprojThreshold</code> - Parameter used only for RANSAC. It is the maximum distance from a point to an epipolar
line in pixels, beyond which the point is considered an outlier and is not used for computing the
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
point localization, image resolution, and the image noise.</dd>
<dd><code>confidence</code> - Parameter used for the RANSAC and LMedS methods only. It specifies a desirable level
of confidence (probability) that the estimated matrix is correct.</dd>
<dd><code>maxIters</code> - The maximum number of robust method iterations.

The epipolar geometry is described by the following equation:

\([p_2; 1]^T F [p_1; 1] = 0\)

where \(F\) is a fundamental matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
second images, respectively.

The function calculates the fundamental matrix using one of four methods listed above and returns
the found fundamental matrix. Normally just one matrix is found. But in case of the 7-point
algorithm, the function may return up to 3 solutions ( \(9 \times 3\) matrix that stores all 3
matrices sequentially).

The calculated fundamental matrix may be passed further to #computeCorrespondEpilines that finds the
epipolar lines corresponding to the specified points. It can also be passed to
#stereoRectifyUncalibrated to compute the rectification transformation. :
<code>
// Example. Estimation of fundamental matrix using the RANSAC algorithm
int point_count = 100;
vector&lt;Point2f&gt; points1(point_count);
vector&lt;Point2f&gt; points2(point_count);

// initialize the points here ...
for( int i = 0; i &lt; point_count; i++ )
{
    points1[i] = ...;
    points2[i] = ...;
}

Mat fundamental_matrix =
    findFundamentalMat(points1, points2, FM_RANSAC, 3, 0.99);
</code></dd>
<dt>Returns:</dt>
<dd>automatically generated</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,double,org.opencv.core.Mat)">
<h3>findFundamentalMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findFundamentalMat</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
int method,
double ransacReprojThreshold,
double confidence,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</span></div>
</section>
</li>
<li>
<section class="detail" id="findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double,double)">
<h3>findFundamentalMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findFundamentalMat</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
int method,
double ransacReprojThreshold,
double confidence)</span></div>
</section>
</li>
<li>
<section class="detail" id="findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int,double)">
<h3>findFundamentalMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findFundamentalMat</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
int method,
double ransacReprojThreshold)</span></div>
</section>
</li>
<li>
<section class="detail" id="findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,int)">
<h3>findFundamentalMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findFundamentalMat</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
int method)</span></div>
</section>
</li>
<li>
<section class="detail" id="findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f)">
<h3>findFundamentalMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findFundamentalMat</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2)</span></div>
</section>
</li>
<li>
<section class="detail" id="findFundamentalMat(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.calib3d.UsacParams)">
<h3>findFundamentalMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findFundamentalMat</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points1,
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
<a href="UsacParams.html" title="class in org.opencv.calib3d">UsacParams</a> params)</span></div>
</section>
</li>
<li>
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double,int,org.opencv.core.Mat)">
<h3>findEssentialMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
int method,
double prob,
double threshold,
int maxIters,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</span></div>
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
<dl class="notes">
<dt>Parameters:</dt>
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
be floating-point (single or double precision).</dd>
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\).
Note that this function assumes that points1 and points2 are feature points from cameras with the
same camera intrinsic matrix. If this assumption does not hold for your use case, use another
function overload or #undistortPoints with <code>P = cv::NoArray()</code> for both cameras to transform image
points to normalized image coordinates, which are valid for the identity camera intrinsic matrix.
When passing these coordinates, pass the identity matrix for this parameter.</dd>
<dd><code>method</code> - Method for computing an essential matrix.
<ul>
<li>
REF: RANSAC for the RANSAC algorithm.
</li>
<li>
REF: LMEDS for the LMedS algorithm.
</li>
</ul></dd>
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
confidence (probability) that the estimated matrix is correct.</dd>
<dd><code>threshold</code> - Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
line in pixels, beyond which the point is considered an outlier and is not used for computing the
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
point localization, image resolution, and the image noise.</dd>
<dd><code>mask</code> - Output array of N elements, every element of which is set to 0 for outliers and to 1
for the other points. The array is computed only in the RANSAC and LMedS methods.</dd>
<dd><code>maxIters</code> - The maximum number of robust method iterations.

This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03.
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:

\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)

where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
second images, respectively. The result of this function may be passed further to
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
<dt>Returns:</dt>
<dd>automatically generated</dd>
</dl>
</section>
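The normalized-image-coordinates trick mentioned for <code>cameraMatrix</code> amounts to applying \(K^{-1}\) to pixel coordinates. A minimal sketch, assuming a pinhole \(K\) with focal lengths fx, fy and principal point (cx, cy) and no distortion (the class and helper names are hypothetical):

```java
// Pixel -> normalized image coordinates, i.e. applying K^-1 for
// K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]. Points normalized this way
// can be passed to findEssentialMat with the identity as cameraMatrix.
public class NormalizedPointSketch {
    public static double[] normalize(double u, double v,
                                     double fx, double fy, double cx, double cy) {
        return new double[]{(u - cx) / fx, (v - cy) / fy};
    }

    public static void main(String[] args) {
        // Pixel (960, 540) with fx = fy = 800, principal point (640, 360).
        double[] n = normalize(960, 540, 800, 800, 640, 360);
        System.out.println(java.util.Arrays.toString(n));
    }
}
```

This sketch ignores lens distortion; with a real camera, #undistortPoints performs both the undistortion and this normalization in one call.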
</li>
<li>
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double,int)">
<h3>findEssentialMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
int method,
double prob,
double threshold,
int maxIters)</span></div>
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
<dl class="notes">
<dt>Parameters:</dt>
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
be floating-point (single or double precision).</dd>
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\).
Note that this function assumes that points1 and points2 are feature points from cameras with the
same camera intrinsic matrix. If this assumption does not hold for your use case, use another
function overload or #undistortPoints with <code>P = cv::NoArray()</code> for both cameras to transform image
points to normalized image coordinates, which are valid for the identity camera intrinsic matrix.
When passing these coordinates, pass the identity matrix for this parameter.</dd>
<dd><code>method</code> - Method for computing an essential matrix.
<ul>
<li>
REF: RANSAC for the RANSAC algorithm.
</li>
<li>
REF: LMEDS for the LMedS algorithm.
</li>
</ul></dd>
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
confidence (probability) that the estimated matrix is correct.</dd>
<dd><code>threshold</code> - Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
line in pixels, beyond which the point is considered an outlier and is not used for computing the
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
point localization, image resolution, and the image noise.</dd>
<dd><code>maxIters</code> - The maximum number of robust method iterations.

This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03.
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:

\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)

where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
second images, respectively. The result of this function may be passed further to
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
<dt>Returns:</dt>
<dd>automatically generated</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double)">
<h3>findEssentialMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
int method,
double prob,
double threshold)</span></div>
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
<dl class="notes">
<dt>Parameters:</dt>
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
be floating-point (single or double precision).</dd>
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\).
Note that this function assumes that points1 and points2 are feature points from cameras with the
same camera intrinsic matrix. If this assumption does not hold for your use case, use another
function overload or #undistortPoints with <code>P = cv::NoArray()</code> for both cameras to transform image
points to normalized image coordinates, which are valid for the identity camera intrinsic matrix.
When passing these coordinates, pass the identity matrix for this parameter.</dd>
<dd><code>method</code> - Method for computing an essential matrix.
<ul>
<li>
REF: RANSAC for the RANSAC algorithm.
</li>
<li>
REF: LMEDS for the LMedS algorithm.
</li>
</ul></dd>
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
confidence (probability) that the estimated matrix is correct.</dd>
<dd><code>threshold</code> - Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
line in pixels, beyond which the point is considered an outlier and is not used for computing the
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
point localization, image resolution, and the image noise.

This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03.
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:

\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)

where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
second images, respectively. The result of this function may be passed further to
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
<dt>Returns:</dt>
<dd>automatically generated</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)">
<h3>findEssentialMat</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
int method,
double prob)</span></div>
|
|
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .
|
|
Note that this function assumes that points1 and points2 are feature points from cameras with the
|
|
same camera intrinsic matrix. If this assumption does not hold for your use case, use another
|
|
function overload or #undistortPoints with <code>P = cv::NoArray()</code> for both cameras to transform image
|
|
points to normalized image coordinates, which are valid for the identity camera intrinsic matrix.
|
|
When passing these coordinates, pass the identity matrix for this parameter.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.
|
|
|
|
|
|
This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 .
|
|
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:
|
|
|
|
\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)
|
|
|
|
where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
|
|
second images, respectively. The result of this function may be passed further to
|
|
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
int method)</span></div>
|
|
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .
|
|
Note that this function assumes that points1 and points2 are feature points from cameras with the
|
|
same camera intrinsic matrix. If this assumption does not hold for your use case, use another
|
|
function overload or #undistortPoints with <code>P = cv::NoArray()</code> for both cameras to transform image
|
|
points to normalized image coordinates, which are valid for the identity camera intrinsic matrix.
|
|
When passing these coordinates, pass the identity matrix for this parameter.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul>
|
|
|
|
|
|
This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 .
|
|
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:
|
|
|
|
\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)
|
|
|
|
where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
|
|
second images, respectively. The result of this function may be passed further to
|
|
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix)</span></div>
|
|
<div class="block">Calculates an essential matrix from the corresponding points in two images.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .
|
|
Note that this function assumes that points1 and points2 are feature points from cameras with the
|
|
same camera intrinsic matrix. If this assumption does not hold for your use case, use another
|
|
function overload or #undistortPoints with <code>P = cv::NoArray()</code> for both cameras to transform image
|
|
points to normalized image coordinates, which are valid for the identity camera intrinsic matrix.
|
|
When passing these coordinates, pass the identity matrix for this parameter.
|
|
|
|
|
|
This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 .
|
|
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:
|
|
|
|
\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)
|
|
|
|
where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
|
|
second images, respectively. The result of this function may be passed further to
|
|
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
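The note about normalized image coordinates above amounts to applying K&#8315;&#185; to each pixel coordinate (this is what #undistortPoints does in the distortion-free case). A minimal sketch (plain Java; the intrinsic values are illustrative) of the transform after which the identity matrix is the correct cameraMatrix argument:

```java
// Converts a pixel coordinate to normalized image coordinates by applying K^-1,
// assuming zero lens distortion. After this transform, the identity intrinsic
// matrix is valid for the resulting points.
public class NormalizePoint {

    // K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]];
    // its inverse maps (u, v) -> ((u - cx) / fx, (v - cy) / fy).
    static double[] normalize(double u, double v,
                              double fx, double fy, double cx, double cy) {
        return new double[]{(u - cx) / fx, (v - cy) / fy};
    }

    public static void main(String[] args) {
        double[] xn = normalize(355.0, 222.5, 700.0, 700.0, 320.0, 240.0);
        System.out.println("x = " + xn[0] + ", y = " + xn[1]); // (0.05, -0.025)
    }
}
```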
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,int,double,double,int,org.opencv.core.Mat)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
double focal,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
|
|
int method,
|
|
double prob,
|
|
double threshold,
|
|
int maxIters,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>focal</code> - focal length of the camera. Note that this function assumes that points1 and points2
|
|
are feature points from cameras with the same focal length and principal point.</dd>
|
|
<dd><code>pp</code> - principal point of the camera.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>threshold</code> - Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
|
|
line in pixels, beyond which the point is considered an outlier and is not used for computing the
|
|
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
|
|
point localization, image resolution, and the image noise.</dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.</dd>
|
|
<dd><code>mask</code> - Output array of N elements, every element of which is set to 0 for outliers and to 1
|
|
for the other points. The array is computed only in the RANSAC and LMedS methods.</dd>
|
|
<dd><code>maxIters</code> - The maximum number of robust method iterations.
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
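The intrinsic matrix \(A\) assembled from focal and pp above is simple enough to build by hand. A minimal sketch (plain Java; the class name, focal length, and principal point values are illustrative), including the pinhole projection it enables:

```java
// Builds the camera intrinsic matrix A = [[f, 0, ppx], [0, f, ppy], [0, 0, 1]]
// that the focal/pp overloads of findEssentialMat assemble internally.
public class IntrinsicFromFocal {

    static double[][] intrinsic(double f, double ppx, double ppy) {
        return new double[][]{
            {f, 0, ppx},
            {0, f, ppy},
            {0, 0, 1}
        };
    }

    public static void main(String[] args) {
        double[][] A = intrinsic(700.0, 320.0, 240.0);
        // Pinhole projection of a camera-frame point (X, Y, Z) to pixels:
        // u = f * X / Z + ppx, v = f * Y / Z + ppy.
        double X = 0.1, Y = -0.05, Z = 2.0;
        double u = A[0][0] * X / Z + A[0][2];
        double v = A[1][1] * Y / Z + A[1][2];
        System.out.println("u = " + u + ", v = " + v); // (355.0, 222.5)
    }
}
```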
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,int,double,double,int)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
double focal,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
|
|
int method,
|
|
double prob,
|
|
double threshold,
|
|
int maxIters)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>focal</code> - focal length of the camera. Note that this function assumes that points1 and points2
|
|
are feature points from cameras with the same focal length and principal point.</dd>
|
|
<dd><code>pp</code> - principal point of the camera.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>threshold</code> - Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
|
|
line in pixels, beyond which the point is considered an outlier and is not used for computing the
|
|
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
|
|
point localization, image resolution, and the image noise.</dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.</dd>
|
|
<dd><code>maxIters</code> - The maximum number of robust method iterations.
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,int,double,double)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
double focal,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
|
|
int method,
|
|
double prob,
|
|
double threshold)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>focal</code> - focal length of the camera. Note that this function assumes that points1 and points2
|
|
are feature points from cameras with the same focal length and principal point.</dd>
|
|
<dd><code>pp</code> - principal point of the camera.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>threshold</code> - Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
|
|
line in pixels, beyond which the point is considered an outlier and is not used for computing the
|
|
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
|
|
point localization, image resolution, and the image noise.</dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.
|
|
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,int,double)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
double focal,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
|
|
int method,
|
|
double prob)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>focal</code> - focal length of the camera. Note that this function assumes that points1 and points2
|
|
are feature points from cameras with the same focal length and principal point.</dd>
|
|
<dd><code>pp</code> - principal point of the camera.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.
|
|
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,int)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
double focal,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
|
|
int method)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>focal</code> - focal length of the camera. Note that this function assumes that points1 and points2
|
|
are feature points from cameras with the same focal length and principal point.</dd>
|
|
<dd><code>pp</code> - principal point of the camera.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul>
|
|
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
double focal,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>focal</code> - focal length of the camera. Note that this function assumes that points1 and points2
|
|
are feature points from cameras with the same focal length and principal point.</dd>
|
|
<dd><code>pp</code> - principal point of the camera.
|
|
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,double)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
double focal)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>focal</code> - focal length of the camera. Note that this function assumes that points1 and points2
|
|
are feature points from cameras with the same focal length and principal point.
|
|
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .
|
|
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double,org.opencv.core.Mat)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
int method,
|
|
double prob,
|
|
double threshold,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</span></div>
|
|
<div class="block">Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix1</code> - Camera matrix for the first camera \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>cameraMatrix2</code> - Camera matrix for the second camera \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs1</code> - Input vector of distortion coefficients for the first camera
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>distCoeffs2</code> - Input vector of distortion coefficients for the second camera
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.</dd>
|
|
<dd><code>threshold</code> - Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
|
|
line in pixels, beyond which the point is considered an outlier and is not used for computing the
|
|
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
|
|
point localization, image resolution, and the image noise.</dd>
|
|
<dd><code>mask</code> - Output array of N elements, every element of which is set to 0 for outliers and to 1
|
|
for the other points. The array is computed only in the RANSAC and LMedS methods.
|
|
|
|
This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 .
|
|
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:
|
|
|
|
\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)
|
|
|
|
where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
|
|
second images, respectively. The result of this function may be passed further to
|
|
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
int method,
|
|
double prob,
|
|
double threshold)</span></div>
|
|
<div class="block">Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix1</code> - Camera matrix for the first camera \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>cameraMatrix2</code> - Camera matrix for the second camera \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs1</code> - Input vector of distortion coefficients for the first camera
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>distCoeffs2</code> - Input vector of distortion coefficients for the second camera
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.</dd>
|
|
<dd><code>threshold</code> - Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
|
|
line in pixels, beyond which the point is considered an outlier and is not used for computing the
|
|
final essential matrix. It can be set to something like 1-3, depending on the accuracy of the
|
|
point localization, image resolution, and the image noise.
|
|
|
|
|
|
This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03.
|
|
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:
|
|
|
|
\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)
|
|
|
|
where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
|
|
second images, respectively. The result of this function may be passed further to
|
|
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
int method,
|
|
double prob)</span></div>
|
|
<div class="block">Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix1</code> - Camera matrix for the first camera \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>cameraMatrix2</code> - Camera matrix for the second camera \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs1</code> - Input vector of distortion coefficients for the first camera
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>distCoeffs2</code> - Input vector of distortion coefficients for the second camera
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.
|
|
|
|
|
|
This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03.
|
|
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:
|
|
|
|
\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)
|
|
|
|
where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
|
|
second images, respectively. The result of this function may be passed further to
|
|
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
int method)</span></div>
|
|
<div class="block">Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix1</code> - Camera matrix for the first camera \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>cameraMatrix2</code> - Camera matrix for the second camera \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs1</code> - Input vector of distortion coefficients for the first camera
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>distCoeffs2</code> - Input vector of distortion coefficients for the second camera
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul>
|
|
|
|
|
|
This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03.
|
|
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:
|
|
|
|
\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)
|
|
|
|
where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
|
|
second images, respectively. The result of this function may be passed further to
|
|
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2)</span></div>
|
|
<div class="block">Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N (N >= 5) 2D points from the first image. The point coordinates should
|
|
be floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix1</code> - Camera matrix for the first camera \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>cameraMatrix2</code> - Camera matrix for the second camera \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs1</code> - Input vector of distortion coefficients for the first camera
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>distCoeffs2</code> - Input vector of distortion coefficients for the second camera
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
|
|
|
|
|
|
This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03.
|
|
CITE: SteweniusCFS is also related. The epipolar geometry is described by the following equation:
|
|
|
|
\([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\)
|
|
|
|
where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the
|
|
second images, respectively. The result of this function may be passed further to
|
|
#decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="findEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.calib3d.UsacParams)">
|
|
<h3>findEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">findEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dist_coeff1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dist_coeff2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
|
|
<a href="UsacParams.html" title="class in org.opencv.calib3d">UsacParams</a> params)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="decomposeEssentialMat(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>decomposeEssentialMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">decomposeEssentialMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t)</span></div>
|
|
<div class="block">Decompose an essential matrix to possible rotations and translation.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>E</code> - The input essential matrix.</dd>
|
|
<dd><code>R1</code> - One possible rotation matrix.</dd>
|
|
<dd><code>R2</code> - Another possible rotation matrix.</dd>
|
|
<dd><code>t</code> - One possible translation.
|
|
|
|
This function decomposes the essential matrix E using SVD decomposition CITE: HartleyZ00. In
|
|
general, four possible poses exist for the decomposition of E. They are \([R_1, t]\),
|
|
\([R_1, -t]\), \([R_2, t]\), \([R_2, -t]\).
|
|
|
|
If E gives the epipolar constraint \([p_2; 1]^T A^{-T} E A^{-1} [p_1; 1] = 0\) between the image
|
|
points \(p_1\) in the first image and \(p_2\) in second image, then any of the tuples
|
|
\([R_1, t]\), \([R_1, -t]\), \([R_2, t]\), \([R_2, -t]\) is a change of basis from the first
|
|
camera's coordinate system to the second camera's coordinate system. However, by decomposing E, one
|
|
can only get the direction of the translation. For this reason, the translation t is returned with
|
|
unit length.</dd>
|
|
</dl>
|
|
</section>
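The reason t is returned with unit length can be seen directly from E = [t]_x R: the cross-product matrix [t]_x is linear in t, so scaling the baseline scales every entry of E by the same factor, and the scale of t is therefore unrecoverable from E alone. The following plain-Java sketch (hypothetical values, no OpenCV types) demonstrates this:

```java
// Why t is only recoverable up to scale: [t]_x is linear in t, so doubling the
// baseline doubles every entry of E = [t]_x R. All values are hypothetical.
public class EssentialScale {

    static double[][] skew(double[] t) {
        return new double[][] {
            { 0,    -t[2],  t[1] },
            { t[2],  0,    -t[0] },
            {-t[1],  t[0],  0    }};
    }

    static double[][] matmul(double[][] a, double[][] b) {
        double[][] c = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    public static void main(String[] args) {
        double[][] R = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}; // identity rotation for simplicity
        double[] t  = {0.3, -0.1, 0.5};
        double[] t2 = {0.6, -0.2, 1.0};                   // same direction, twice the length
        double[][] e1 = matmul(skew(t),  R);
        double[][] e2 = matmul(skew(t2), R);

        boolean proportional = true;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                proportional &= Math.abs(e2[i][j] - 2 * e1[i][j]) < 1e-12;
        // true: E determines t only up to scale, hence the unit-length convention
        System.out.println(proportional);
    }
}
```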
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double,org.opencv.core.Mat)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
int method,
|
|
double prob,
|
|
double threshold,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</span></div>
|
|
<div class="block">Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using the cheirality check. Returns the number of
|
|
inliers that pass the check.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>cameraMatrix1</code> - Input/output camera matrix for the first camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs1</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix2</code> - Input/output camera matrix for the second camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs2</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>E</code> - The output essential matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
described below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.</dd>
|
|
<dd><code>threshold</code> - Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
|
|
line in pixels, beyond which the point is considered an outlier and is not used for computing the
|
|
final essential matrix. It can be set to something like 1-3, depending on the accuracy of the
|
|
point localization, image resolution, and the image noise.</dd>
|
|
<dd><code>mask</code> - Input/output mask for inliers in points1 and points2. If it is not empty, then it marks
|
|
inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. In the output mask, only the inliers that pass the cheirality check are marked.
|
|
|
|
This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies
|
|
possible pose hypotheses by performing the cheirality check. The cheirality check means that the
|
|
triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03.
|
|
|
|
This function can be used to process the output E and mask from REF: findEssentialMat. In this
|
|
scenario, points1 and points2 are the same input as for findEssentialMat:
|
|
<code>
|
|
// Example. Estimation of the essential matrix and relative pose using the RANSAC algorithm
|
|
int point_count = 100;
|
|
vector&lt;Point2f&gt; points1(point_count);
|
|
vector&lt;Point2f&gt; points2(point_count);
|
|
|
|
// initialize the points here ...
|
|
for( int i = 0; i < point_count; i++ )
|
|
{
|
|
points1[i] = ...;
|
|
points2[i] = ...;
|
|
}
|
|
|
|
// Input: camera calibration of both cameras, for example using intrinsic chessboard calibration.
|
|
Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2;
|
|
|
|
// Output: Essential matrix, relative rotation and relative translation.
|
|
Mat E, R, t, mask;
|
|
|
|
recoverPose(points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, E, R, t, mask);
|
|
</code></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
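The cheirality check described above reduces to a depth-sign test: of the four pose hypotheses from decomposeEssentialMat, only the one that places the triangulated points in front of both cameras survives. A minimal plain-Java sketch, with a hypothetical identity rotation and baseline, shows how flipping the sign of t flips the depth in the second camera:

```java
// Cheirality (positive-depth) test for a pose hypothesis (R, t).
// Plain arrays and hypothetical values; no OpenCV types are used.
public class CheiralityDemo {

    // depth (z coordinate) of a camera-1 point X1 seen from the second camera
    static double depthInCam2(double[][] r, double[] t, double[] x1) {
        // third row of R*X1 + t
        return r[2][0] * x1[0] + r[2][1] * x1[1] + r[2][2] * x1[2] + t[2];
    }

    public static void main(String[] args) {
        double[][] r = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}; // identity rotation for simplicity
        double[] t    = {0, 0,  2};                       // candidate translation
        double[] tNeg = {0, 0, -2};                       // the sign-flipped alternative
        double[] x1 = {0.2, -0.1, 1.0};                   // point in front of camera 1

        // Only one of the two hypotheses keeps the point in front of camera 2:
        System.out.println(depthInCam2(r, t, x1) > 0);    // true
        System.out.println(depthInCam2(r, tNeg, x1) > 0); // false
    }
}
```

recoverPose applies this test across all inlier correspondences and keeps the hypothesis with the largest number of positive-depth points.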
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,double)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
int method,
|
|
double prob,
|
|
double threshold)</span></div>
|
|
<div class="block">Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using the cheirality check. Returns the number of
|
|
inliers that pass the check.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>cameraMatrix1</code> - Input/output camera matrix for the first camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs1</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix2</code> - Input/output camera matrix for the second camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs2</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>E</code> - The output essential matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
described below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.</dd>
|
|
<dd><code>threshold</code> - Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
|
|
line in pixels, beyond which the point is considered an outlier and is not used for computing the
|
|
final essential matrix. It can be set to something like 1-3, depending on the accuracy of the
|
|
point localization, image resolution, and the image noise.
|
|
|
|
|
|
This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies
|
|
possible pose hypotheses by performing the cheirality check. The cheirality check means that the
|
|
triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03.
|
|
|
|
This function can be used to process the output E and mask from REF: findEssentialMat. In this
|
|
scenario, points1 and points2 are the same input as for findEssentialMat:
|
|
<code>
|
|
// Example. Estimation of the essential matrix and relative pose using the RANSAC algorithm
|
|
int point_count = 100;
|
|
vector&lt;Point2f&gt; points1(point_count);
|
|
vector&lt;Point2f&gt; points2(point_count);
|
|
|
|
// initialize the points here ...
|
|
for( int i = 0; i < point_count; i++ )
|
|
{
|
|
points1[i] = ...;
|
|
points2[i] = ...;
|
|
}
|
|
|
|
// Input: camera calibration of both cameras, for example using intrinsic chessboard calibration.
|
|
Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2;
|
|
|
|
// Output: Essential matrix, relative rotation and relative translation.
|
|
Mat E, R, t, mask;
|
|
|
|
recoverPose(points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, E, R, t, mask);
|
|
</code></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
int method,
|
|
double prob)</span></div>
|
|
<div class="block">Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using the cheirality check. Returns the number of
|
|
inliers that pass the check.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>cameraMatrix1</code> - Input/output camera matrix for the first camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs1</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix2</code> - Input/output camera matrix for the second camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs2</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>E</code> - The output essential matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
described below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>prob</code> - Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
|
|
confidence (probability) that the estimated matrix is correct.
|
|
line in pixels, beyond which the point is considered an outlier and is not used for computing the
|
|
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
|
|
point localization, image resolution, and the image noise.
|
|
The omitted mask parameter marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. Only the inliers that pass the chirality check are marked in the output mask.
|
|
|
|
This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies
|
|
possible pose hypotheses by performing the chirality check. The chirality check means that the
|
|
triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03.
|
|
|
|
This function can be used to process the output E and mask from REF: findEssentialMat. In this
|
|
scenario, points1 and points2 are the same input as for findEssentialMat:
|
|
<code>
|
|
// Example. Estimation of the essential matrix and relative pose using the RANSAC algorithm
|
|
int point_count = 100;
|
|
vector<Point2f> points1(point_count);
|
|
vector<Point2f> points2(point_count);
|
|
|
|
// initialize the points here ...
|
|
for( int i = 0; i < point_count; i++ )
|
|
{
|
|
points1[i] = ...;
|
|
points2[i] = ...;
|
|
}
|
|
|
|
// Input: camera calibration of both cameras, for example using intrinsic chessboard calibration.
|
|
Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2;
|
|
|
|
// Output: Essential matrix, relative rotation and relative translation.
|
|
Mat E, R, t, mask;
|
|
|
|
recoverPose(points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, E, R, t, mask);
|
|
</code></dd>
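<dd>
The outputs described above can be sanity-checked numerically: the recovered E, R and t satisfy E proportional to [t]_x R, and corresponding normalized image points obey the epipolar constraint x2^T E x1 = 0. A small standalone numpy sketch (illustrative only; the function names are not part of the OpenCV API):

```python
import numpy as np

def skew(v):
    # Cross-product (skew-symmetric) matrix: skew(v) @ u == np.cross(v, u).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def essential_from_pose(R, t):
    # E = [t]_x R; the essential matrix is only defined up to scale.
    return skew(t) @ R

def epipolar_residuals(E, pts1, pts2):
    # Residuals x2^T E x1 for normalized (undistorted) image points;
    # they are ~0 for true correspondences.
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    return np.einsum('ij,jk,ik->i', x2, E, x1)
```

A zero residual for a correspondence indicates it is consistent with the recovered relative pose; large residuals correspond to outliers that RANSAC would reject.
</dd>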
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
int method)</span></div>
|
|
<div class="block">Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using cheirality check. Returns the number of
|
|
inliers that pass the check.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>cameraMatrix1</code> - Input/output camera matrix for the first camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs1</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix2</code> - Input/output camera matrix for the second camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs2</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>E</code> - The output essential matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
described below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>method</code> - Method for computing an essential matrix.
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul>
|
|
The omitted prob parameter specifies a desirable level of confidence (probability) that the estimated matrix is correct.
|
|
The omitted threshold parameter is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the
|
|
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
|
|
point localization, image resolution, and the image noise.
|
|
The omitted mask parameter marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. Only the inliers that pass the chirality check are marked in the output mask.
|
|
|
|
This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies
|
|
possible pose hypotheses by performing the chirality check. The chirality check means that the
|
|
triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03.
|
|
|
|
This function can be used to process the output E and mask from REF: findEssentialMat. In this
|
|
scenario, points1 and points2 are the same input as for findEssentialMat:
|
|
<code>
|
|
// Example. Estimation of the essential matrix and relative pose using the RANSAC algorithm
|
|
int point_count = 100;
|
|
vector<Point2f> points1(point_count);
|
|
vector<Point2f> points2(point_count);
|
|
|
|
// initialize the points here ...
|
|
for( int i = 0; i < point_count; i++ )
|
|
{
|
|
points1[i] = ...;
|
|
points2[i] = ...;
|
|
}
|
|
|
|
// Input: camera calibration of both cameras, for example using intrinsic chessboard calibration.
|
|
Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2;
|
|
|
|
// Output: Essential matrix, relative rotation and relative translation.
|
|
Mat E, R, t, mask;
|
|
|
|
recoverPose(points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, E, R, t, mask);
|
|
</code></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t)</span></div>
|
|
<div class="block">Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using cheirality check. Returns the number of
|
|
inliers that pass the check.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>cameraMatrix1</code> - Input/output camera matrix for the first camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs1</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>cameraMatrix2</code> - Input/output camera matrix for the second camera, the same as in
|
|
REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.</dd>
|
|
<dd><code>distCoeffs2</code> - Input/output vector of distortion coefficients, the same as in
|
|
REF: calibrateCamera.</dd>
|
|
<dd><code>E</code> - The output essential matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
described below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length. The omitted method parameter selects the method for computing an essential matrix:
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC for the RANSAC algorithm.
|
|
</li>
|
|
<li>
|
|
REF: LMEDS for the LMedS algorithm.
|
|
</li>
|
|
</ul>
|
|
The omitted prob parameter specifies a desirable level of confidence (probability) that the estimated matrix is correct.
|
|
The omitted threshold parameter is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the
|
|
final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
|
|
point localization, image resolution, and the image noise.
|
|
The omitted mask parameter marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. Only the inliers that pass the chirality check are marked in the output mask.
|
|
|
|
This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies
|
|
possible pose hypotheses by performing the chirality check. The chirality check means that the
|
|
triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03.
|
|
|
|
This function can be used to process the output E and mask from REF: findEssentialMat. In this
|
|
scenario, points1 and points2 are the same input as for findEssentialMat:
|
|
<code>
|
|
// Example. Estimation of the essential matrix and relative pose using the RANSAC algorithm
|
|
int point_count = 100;
|
|
vector<Point2f> points1(point_count);
|
|
vector<Point2f> points2(point_count);
|
|
|
|
// initialize the points here ...
|
|
for( int i = 0; i < point_count; i++ )
|
|
{
|
|
points1[i] = ...;
|
|
points2[i] = ...;
|
|
}
|
|
|
|
// Input: camera calibration of both cameras, for example using intrinsic chessboard calibration.
|
|
Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2;
|
|
|
|
// Output: Essential matrix, relative rotation and relative translation.
|
|
Mat E, R, t, mask;
|
|
|
|
recoverPose(points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, E, R, t, mask);
|
|
</code></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</span></div>
|
|
<div class="block">Recovers the relative camera rotation and the translation from an estimated essential
|
|
matrix and the corresponding points in two images, using the chirality check. Returns the number of
|
|
inliers that pass the check.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>E</code> - The input essential matrix.</dd>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .
|
|
Note that this function assumes that points1 and points2 are feature points from cameras with the
|
|
same camera intrinsic matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
described below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>mask</code> - Input/output mask for inliers in points1 and points2. If it is not empty, then it marks
|
|
inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. Only the inliers that pass the chirality check are marked in the output mask.
|
|
|
|
This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies
|
|
possible pose hypotheses by performing the chirality check. The chirality check means that the
|
|
triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03.
|
|
|
|
This function can be used to process the output E and mask from REF: findEssentialMat. In this
|
|
scenario, points1 and points2 are the same input as for findEssentialMat:
|
|
<code>
|
|
// Example. Estimation of the essential matrix using the RANSAC algorithm
|
|
int point_count = 100;
|
|
vector<Point2f> points1(point_count);
|
|
vector<Point2f> points2(point_count);
|
|
|
|
// initialize the points here ...
|
|
for( int i = 0; i < point_count; i++ )
|
|
{
|
|
points1[i] = ...;
|
|
points2[i] = ...;
|
|
}
|
|
|
|
// camera matrix with both focal lengths = 1 and principal point = (0, 0)
|
|
Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
|
|
|
|
Mat E, R, t, mask;
|
|
|
|
E = findEssentialMat(points1, points2, cameraMatrix, RANSAC, 0.999, 1.0, mask);
|
|
recoverPose(E, points1, points2, cameraMatrix, R, t, mask);
|
|
</code></dd>
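<dd>
The decomposition-and-chirality procedure described above can be sketched independently of OpenCV. The following standalone numpy sketch (the function names and the linear triangulation are illustrative, not the OpenCV implementation) enumerates the four candidate (R, t) hypotheses obtained from an essential matrix and keeps the one for which the most triangulated points have positive depth in both camera frames:

```python
import numpy as np

def decompose_essential(E):
    # SVD-based decomposition: E = U diag(1,1,0) V^T yields two rotations
    # and a translation direction known only up to sign.
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one normalized point correspondence.
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def recover_pose_sketch(E, pts1, pts2):
    # Pick the (R, t) hypothesis with the most points in front of both cameras.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    best = None
    for R, t in decompose_essential(E):
        P2 = np.hstack([R, t.reshape(3, 1)])
        good = 0
        for x1, x2 in zip(pts1, pts2):
            X = triangulate(P1, P2, x1, x2)
            # Chirality check: positive depth in both camera frames.
            if X[2] > 0 and (R @ X + t)[2] > 0:
                good += 1
        if best is None or good > best[0]:
            best = (good, R, t)
    return best[1], best[2], best[0]
```

The returned inlier count corresponds to the value recoverPose returns, i.e. the number of correspondences that pass the chirality check for the winning hypothesis.
</dd>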
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t)</span></div>
|
|
<div class="block">Recovers the relative camera rotation and the translation from an estimated essential
|
|
matrix and the corresponding points in two images, using the chirality check. Returns the number of
|
|
inliers that pass the check.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>E</code> - The input essential matrix.</dd>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .
|
|
Note that this function assumes that points1 and points2 are feature points from cameras with the
|
|
same camera intrinsic matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
described below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.
|
|
The omitted mask parameter marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. Only the inliers that pass the chirality check are marked in the output mask.
|
|
|
|
This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies
|
|
possible pose hypotheses by performing the chirality check. The chirality check means that the
|
|
triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03.
|
|
|
|
This function can be used to process the output E and mask from REF: findEssentialMat. In this
|
|
scenario, points1 and points2 are the same input as for findEssentialMat:
|
|
<code>
|
|
// Example. Estimation of the essential matrix using the RANSAC algorithm
|
|
int point_count = 100;
|
|
vector<Point2f> points1(point_count);
|
|
vector<Point2f> points2(point_count);
|
|
|
|
// initialize the points here ...
|
|
for( int i = 0; i < point_count; i++ )
|
|
{
|
|
points1[i] = ...;
|
|
points2[i] = ...;
|
|
}
|
|
|
|
// camera matrix with both focal lengths = 1 and principal point = (0, 0)
|
|
Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
|
|
|
|
Mat E, R, t, mask;
|
|
|
|
E = findEssentialMat(points1, points2, cameraMatrix, RANSAC, 0.999, 1.0, mask);
|
|
recoverPose(E, points1, points2, cameraMatrix, R, t, mask);
|
|
</code></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point,org.opencv.core.Mat)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double focal,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>E</code> - The input essential matrix.</dd>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
description below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>focal</code> - Focal length of the camera. Note that this function assumes that points1 and points2
|
|
are feature points from cameras with the same focal length and principal point.</dd>
|
|
<dd><code>pp</code> - principal point of the camera.</dd>
|
|
<dd><code>mask</code> - Input/output mask for inliers in points1 and points2. If it is not empty, then it marks
|
|
inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. Only the inliers that pass the chirality check are marked in the output mask.
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
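<dd>
The intrinsic matrix A above can be assembled directly from the focal length and principal point. A minimal numpy sketch (illustrative only; this is the matrix the overload builds internally, and the helper name is hypothetical):

```python
import numpy as np

def intrinsic_from_focal(focal, pp):
    """Build the 3x3 camera intrinsic matrix A from a single focal length
    (identical for both axes) and the principal point (x_pp, y_pp)."""
    x_pp, y_pp = pp
    return np.array([[focal, 0.0, x_pp],
                     [0.0, focal, y_pp],
                     [0.0, 0.0, 1.0]])

def project(A, xn):
    """Map a normalized image point (x, y) to pixel coordinates using A."""
    u = A @ np.array([xn[0], xn[1], 1.0])
    return u[:2] / u[2]
```

For example, a camera with focal length 800 and principal point (320, 240) maps the normalized point (0, 0) to the pixel (320, 240).
</dd>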
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Point)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double focal,
|
|
<a href="../core/Point.html" title="class in org.opencv.core">Point</a> pp)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>E</code> - The input essential matrix.</dd>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
description below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>focal</code> - Focal length of the camera. Note that this function assumes that points1 and points2
|
|
are feature points from cameras with the same focal length and principal point.</dd>
|
|
<dd><code>pp</code> - principal point of the camera.
|
|
The omitted mask parameter marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. Only the inliers that pass the chirality check are marked in the output mask.
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double focal)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>E</code> - The input essential matrix.</dd>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
description below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>focal</code> - Focal length of the camera. Note that this function assumes that points1 and points2
|
|
are feature points from cameras with the same focal length and principal point.
|
|
The omitted mask parameter marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. Only the inliers that pass the chirality check are marked in the output mask.
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>E</code> - The input essential matrix.</dd>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1 .</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
description below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.
|
|
The omitted focal and pp parameters assume that points1 and points2 are feature points from cameras with the same focal length and principal point.
|
|
The omitted mask parameter marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. Only the inliers that pass the chirality check are marked in the output mask.
|
|
|
|
This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and
|
|
principal point:
|
|
|
|
\(A =
|
|
\begin{bmatrix}
|
|
f & 0 & x_{pp} \\
|
|
0 & f & y_{pp} \\
|
|
0 & 0 & 1
|
|
\end{bmatrix}\)</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double distanceThresh,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> triangulatedPoints)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>E</code> - The input essential matrix.</dd>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .
|
|
Note that this function assumes that points1 and points2 are feature points from cameras with the
|
|
same camera intrinsic matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t can not be used for this tuple, see the parameter
|
|
description below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>distanceThresh</code> - threshold distance which is used to filter out far away points (i.e. infinite
|
|
points).</dd>
|
|
<dd><code>mask</code> - Input/output mask for inliers in points1 and points2. If it is not empty, then it marks
|
|
inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. In the output mask, only inliers that pass the chirality check are marked.</dd>
|
|
<dd><code>triangulatedPoints</code> - 3D points which were reconstructed by triangulation.
|
|
|
|
This function differs from the one above in that it outputs the triangulated 3D points that are used for
|
|
the chirality check.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the number of inliers that pass the chirality check</dd>
|
|
</dl>
|
|
</section>
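The chirality check mentioned above requires a triangulated point to have positive depth in both camera frames. A minimal pure-Java sketch of that test, assuming a point X given in the first camera's coordinates and a pose (R, t) performing the change of basis into the second camera's frame (the class and method names are illustrative, not part of the OpenCV API):

```java
public class ChiralityCheck {
    // Returns true if the 3D point (given in camera-1 coordinates) lies in
    // front of both cameras, i.e. has positive depth in each frame.
    static boolean passesChirality(double[][] R, double[] t, double[] X) {
        double z1 = X[2];                       // depth in camera 1
        // Third row of X2 = R * X + t : depth in camera 2
        double z2 = R[2][0] * X[0] + R[2][1] * X[1] + R[2][2] * X[2] + t[2];
        return z1 > 0 && z2 > 0;
    }

    public static void main(String[] args) {
        double[][] R = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};  // identity rotation
        double[] t = {-1, 0, 0};                            // unit-length translation
        System.out.println(passesChirality(R, t, new double[]{0, 0, 5}));   // in front of both
        System.out.println(passesChirality(R, t, new double[]{0, 0, -5}));  // behind both
    }
}
```

Points failing this test are exactly the ones cleared from the output mask.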
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Mat)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double distanceThresh,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> mask)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>E</code> - The input essential matrix.</dd>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .
|
|
Note that this function assumes that points1 and points2 are feature points from cameras with the
|
|
same camera intrinsic matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t cannot be used for this tuple; see the parameter
|
|
description below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>distanceThresh</code> - threshold distance which is used to filter out far away points (i.e. infinite
|
|
points).</dd>
|
|
<dd><code>mask</code> - Input/output mask for inliers in points1 and points2. If it is not empty, then it marks
|
|
inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to
|
|
recover pose. In the output mask, only inliers that pass the chirality check are marked.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the number of inliers that pass the chirality check</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="recoverPose(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)">
|
|
<h3>recoverPose</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">recoverPose</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> E,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> t,
|
|
double distanceThresh)</span></div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>E</code> - The input essential matrix.</dd>
|
|
<dd><code>points1</code> - Array of N 2D points from the first image. The point coordinates should be
|
|
floating-point (single or double precision).</dd>
|
|
<dd><code>points2</code> - Array of the second image points of the same size and format as points1.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera intrinsic matrix \(\cameramatrix{A}\) .
|
|
Note that this function assumes that points1 and points2 are feature points from cameras with the
|
|
same camera intrinsic matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix. Together with the translation vector, this matrix makes up a tuple
|
|
that performs a change of basis from the first camera's coordinate system to the second camera's
|
|
coordinate system. Note that, in general, t cannot be used for this tuple; see the parameter
|
|
description below.</dd>
|
|
<dd><code>t</code> - Output translation vector. This vector is obtained by REF: decomposeEssentialMat and
|
|
therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit
|
|
length.</dd>
|
|
<dd><code>distanceThresh</code> - threshold distance which is used to filter out far away points (i.e. infinite
|
|
points).</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>the number of inliers that pass the chirality check</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="computeCorrespondEpilines(org.opencv.core.Mat,int,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>computeCorrespondEpilines</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">computeCorrespondEpilines</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points,
|
|
int whichImage,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> lines)</span></div>
|
|
<div class="block">For points in an image of a stereo pair, computes the corresponding epilines in the other image.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>points</code> - Input points. \(N \times 1\) or \(1 \times N\) matrix of type CV_32FC2 or
|
|
vector<Point2f> .</dd>
|
|
<dd><code>whichImage</code> - Index of the image (1 or 2) that contains the points .</dd>
|
|
<dd><code>F</code> - Fundamental matrix that can be estimated using #findFundamentalMat or #stereoRectify .</dd>
|
|
<dd><code>lines</code> - Output vector of the epipolar lines corresponding to the points in the other image.
|
|
Each line \(ax + by + c=0\) is encoded by 3 numbers \((a, b, c)\) .
|
|
|
|
For every point in one of the two images of a stereo pair, the function finds the equation of the
|
|
corresponding epipolar line in the other image.
|
|
|
|
From the fundamental matrix definition (see #findFundamentalMat ), line \(l^{(2)}_i\) in the second
|
|
image for the point \(p^{(1)}_i\) in the first image (when whichImage=1 ) is computed as:
|
|
|
|
\(l^{(2)}_i = F p^{(1)}_i\)
|
|
|
|
And vice versa, when whichImage=2, \(l^{(1)}_i\) is computed from \(p^{(2)}_i\) as:
|
|
|
|
\(l^{(1)}_i = F^T p^{(2)}_i\)
|
|
|
|
Line coefficients are defined up to a scale. They are normalized so that \(a_i^2+b_i^2=1\) .</dd>
|
|
</dl>
|
|
</section>
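The mapping above (for whichImage=1, \(l^{(2)} = F p^{(1)}\), followed by normalization so that \(a^2+b^2=1\)) can be sketched in plain Java without OpenCV; the fundamental matrix and point below are made-up illustrative values:

```java
public class EpilineSketch {
    // Computes the epipolar line l = F * p for a point p = (x, y, 1) in image 1,
    // normalized so that a^2 + b^2 = 1, as computeCorrespondEpilines does.
    static double[] epilineInSecondImage(double[][] F, double x, double y) {
        double[] p = {x, y, 1.0};
        double[] l = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                l[i] += F[i][j] * p[j];
        double s = Math.hypot(l[0], l[1]);  // normalization factor sqrt(a^2 + b^2)
        return new double[]{l[0] / s, l[1] / s, l[2] / s};
    }

    public static void main(String[] args) {
        double[][] F = {{0, -1, 0}, {1, 0, 0}, {0, 0, 0}};  // illustrative fundamental matrix
        double[] l = epilineInSecondImage(F, 2, 3);
        System.out.println(Math.round(l[0] * 1000) / 1000.0 + " "
                         + Math.round(l[1] * 1000) / 1000.0);
        System.out.println(Math.abs(l[0] * l[0] + l[1] * l[1] - 1.0) < 1e-9);  // normalized?
    }
}
```

For whichImage=2 the same routine applies with F transposed, per \(l^{(1)} = F^T p^{(2)}\).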
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="triangulatePoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>triangulatePoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">triangulatePoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatr1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projMatr2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projPoints1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> projPoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points4D)</span></div>
|
|
<div class="block">This function reconstructs 3-dimensional points (in homogeneous coordinates) by using
|
|
their observations with a stereo camera.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>projMatr1</code> - 3x4 projection matrix of the first camera, i.e. this matrix projects 3D points
|
|
given in the world's coordinate system into the first image.</dd>
|
|
<dd><code>projMatr2</code> - 3x4 projection matrix of the second camera, i.e. this matrix projects 3D points
|
|
given in the world's coordinate system into the second image.</dd>
|
|
<dd><code>projPoints1</code> - 2xN array of feature points in the first image. In the case of the C++ version,
|
|
it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1.</dd>
|
|
<dd><code>projPoints2</code> - 2xN array of corresponding points in the second image. In the case of the C++
|
|
version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1.</dd>
|
|
<dd><code>points4D</code> - 4xN array of reconstructed points in homogeneous coordinates. These points are
|
|
returned in the world's coordinate system.
|
|
|
|
<b>Note:</b>
|
|
Keep in mind that all input data should be of float type in order for this function to work.
|
|
|
|
<b>Note:</b>
|
|
If the projection matrices from REF: stereoRectify are used, then the returned points are
|
|
represented in the first camera's rectified coordinate system.
|
|
|
|
SEE:
|
|
reprojectImageTo3D</dd>
|
|
</dl>
|
|
</section>
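The projection matrices consumed above map homogeneous world points to image points via \(x \sim P X\). A small pure-Java illustration of that model, producing the kind of 2D observations triangulatePoints takes as input (the matrices and point are invented for the example, not OpenCV calls):

```java
public class ProjectionSketch {
    // Projects a homogeneous 3D point X (4-vector) with a 3x4 projection matrix P
    // and dehomogenizes to pixel coordinates (u, v).
    static double[] project(double[][] P, double[] X) {
        double[] x = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 4; j++)
                x[i] += P[i][j] * X[j];
        return new double[]{x[0] / x[2], x[1] / x[2]};
    }

    public static void main(String[] args) {
        // P1 = [I | 0], P2 = [I | t] with a unit baseline along X (illustrative).
        double[][] P1 = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}};
        double[][] P2 = {{1, 0, 0, -1}, {0, 1, 0, 0}, {0, 0, 1, 0}};
        double[] X = {0, 0, 2, 1};          // world point at depth 2
        double[] x1 = project(P1, X);       // observation in image 1
        double[] x2 = project(P2, X);       // observation in image 2
        System.out.println(x1[0] + " " + x1[1] + " -> " + x2[0] + " " + x2[1]);
    }
}
```

triangulatePoints inverts this process: given x1, x2, P1 and P2, it recovers X in homogeneous coordinates.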
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="correctMatches(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>correctMatches</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">correctMatches</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> points2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newPoints1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newPoints2)</span></div>
|
|
<div class="block">Refines coordinates of corresponding points.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>F</code> - 3x3 fundamental matrix.</dd>
|
|
<dd><code>points1</code> - 1xN array containing the first set of points.</dd>
|
|
<dd><code>points2</code> - 1xN array containing the second set of points.</dd>
|
|
<dd><code>newPoints1</code> - The optimized points1.</dd>
|
|
<dd><code>newPoints2</code> - The optimized points2.
|
|
|
|
The function implements the Optimal Triangulation Method (see Multiple View Geometry CITE: HartleyZ00 for details).
|
|
For each given point correspondence points1[i] <-> points2[i], and a fundamental matrix F, it
|
|
computes the corrected correspondences newPoints1[i] <-> newPoints2[i] that minimize the geometric
|
|
error \(d(points1[i], newPoints1[i])^2 + d(points2[i],newPoints2[i])^2\) (where \(d(a,b)\) is the
|
|
geometric distance between points \(a\) and \(b\) ) subject to the epipolar constraint
|
|
\(newPoints2^T \cdot F \cdot newPoints1 = 0\) .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="filterSpeckles(org.opencv.core.Mat,double,int,double,org.opencv.core.Mat)">
|
|
<h3>filterSpeckles</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">filterSpeckles</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> img,
|
|
double newVal,
|
|
int maxSpeckleSize,
|
|
double maxDiff,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> buf)</span></div>
|
|
<div class="block">Filters off small noise blobs (speckles) in the disparity map.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>img</code> - The input 16-bit signed disparity image</dd>
|
|
<dd><code>newVal</code> - The disparity value used to paint-off the speckles</dd>
|
|
<dd><code>maxSpeckleSize</code> - The maximum speckle size to consider it a speckle. Larger blobs are not
|
|
affected by the algorithm</dd>
|
|
<dd><code>maxDiff</code> - Maximum difference between neighbor disparity pixels to put them into the same
|
|
blob. Note that since StereoBM, StereoSGBM, and possibly other algorithms return a fixed-point
|
|
disparity map, where disparity values are multiplied by 16, this scale factor should be taken into
|
|
account when specifying this parameter value.</dd>
|
|
<dd><code>buf</code> - The optional temporary buffer to avoid memory allocation within the function.</dd>
|
|
</dl>
|
|
</section>
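Because StereoBM and StereoSGBM return fixed-point disparities scaled by 16, maxDiff must be given in the same fixed-point units. A tiny helper illustrating the conversion (the constant and method names are assumptions for this sketch, not OpenCV API):

```java
public class DisparityScale {
    static final int DISP_SCALE = 16;  // fixed-point scale used by StereoBM/StereoSGBM output

    // Converts a maxDiff threshold expressed in true (pixel) disparity units
    // into the fixed-point units stored in a 16-bit disparity map.
    static double maxDiffFixedPoint(double maxDiffTrue) {
        return maxDiffTrue * DISP_SCALE;
    }

    public static void main(String[] args) {
        // A tolerance of 1 true disparity pixel becomes 16 fixed-point units.
        System.out.println(maxDiffFixedPoint(1.0));
    }
}
```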
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="filterSpeckles(org.opencv.core.Mat,double,int,double)">
|
|
<h3>filterSpeckles</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">filterSpeckles</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> img,
|
|
double newVal,
|
|
int maxSpeckleSize,
|
|
double maxDiff)</span></div>
|
|
<div class="block">Filters off small noise blobs (speckles) in the disparity map.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>img</code> - The input 16-bit signed disparity image</dd>
|
|
<dd><code>newVal</code> - The disparity value used to paint-off the speckles</dd>
|
|
<dd><code>maxSpeckleSize</code> - The maximum speckle size to consider it a speckle. Larger blobs are not
|
|
affected by the algorithm</dd>
|
|
<dd><code>maxDiff</code> - Maximum difference between neighbor disparity pixels to put them into the same
|
|
blob. Note that since StereoBM, StereoSGBM, and possibly other algorithms return a fixed-point
|
|
disparity map, where disparity values are multiplied by 16, this scale factor should be taken into
|
|
account when specifying this parameter value.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="getValidDisparityROI(org.opencv.core.Rect,org.opencv.core.Rect,int,int,int)">
|
|
<h3>getValidDisparityROI</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Rect.html" title="class in org.opencv.core">Rect</a></span> <span class="element-name">getValidDisparityROI</span><wbr><span class="parameters">(<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> roi1,
|
|
<a href="../core/Rect.html" title="class in org.opencv.core">Rect</a> roi2,
|
|
int minDisparity,
|
|
int numberOfDisparities,
|
|
int blockSize)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="validateDisparity(org.opencv.core.Mat,org.opencv.core.Mat,int,int,int)">
|
|
<h3>validateDisparity</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">validateDisparity</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> disparity,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cost,
|
|
int minDisparity,
|
|
int numberOfDisparities,
|
|
int disp12MaxDisp)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="validateDisparity(org.opencv.core.Mat,org.opencv.core.Mat,int,int)">
|
|
<h3>validateDisparity</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">validateDisparity</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> disparity,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cost,
|
|
int minDisparity,
|
|
int numberOfDisparities)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="reprojectImageTo3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int)">
|
|
<h3>reprojectImageTo3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">reprojectImageTo3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> disparity,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> _3dImage,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
boolean handleMissingValues,
|
|
int ddepth)</span></div>
|
|
<div class="block">Reprojects a disparity image to 3D space.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>disparity</code> - Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit
|
|
floating-point disparity image. The values of 8-bit / 16-bit signed formats are assumed to have no
|
|
fractional bits. If the disparity is 16-bit signed format, as computed by REF: StereoBM or
|
|
REF: StereoSGBM and possibly other algorithms, it should be divided by 16 (and scaled to float) before
|
|
being used here.</dd>
|
|
<dd><code>_3dImage</code> - Output 3-channel floating-point image of the same size as disparity. Each element of
|
|
_3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity map. If one
|
|
uses Q obtained by REF: stereoRectify, then the returned points are represented in the first
|
|
camera's rectified coordinate system.</dd>
|
|
<dd><code>Q</code> - \(4 \times 4\) perspective transformation matrix that can be obtained with
|
|
REF: stereoRectify.</dd>
|
|
<dd><code>handleMissingValues</code> - Indicates whether the function should handle missing values (i.e.
|
|
points where the disparity was not computed). If handleMissingValues=true, then pixels with the
|
|
minimal disparity that corresponds to the outliers (see StereoMatcher::compute ) are transformed
|
|
to 3D points with a very large Z value (currently set to 10000).</dd>
|
|
<dd><code>ddepth</code> - The optional output array depth. If it is -1, the output image will have CV_32F
|
|
depth. ddepth can also be set to CV_16S, CV_32S or CV_32F.
|
|
|
|
The function transforms a single-channel disparity map to a 3-channel image representing a 3D
|
|
surface. That is, for each pixel (x,y) and the corresponding disparity d=disparity(x,y) , it
|
|
computes:
|
|
|
|
\(\begin{bmatrix}
|
|
X \\
|
|
Y \\
|
|
Z \\
|
|
W
|
|
\end{bmatrix} = Q \begin{bmatrix}
|
|
x \\
|
|
y \\
|
|
\texttt{disparity} (x,y) \\
|
|
1
|
|
\end{bmatrix}.\)
|
|
|
|
SEE:
|
|
To reproject a sparse set of points {(x,y,d),...} to 3D space, use perspectiveTransform.</dd>
|
|
</dl>
|
|
</section>
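The per-pixel formula above can be evaluated directly in plain Java; the 4x4 matrix Q below is a simplified made-up example, not one produced by stereoRectify:

```java
public class Reproject {
    // Applies [X Y Z W]^T = Q [x y d 1]^T and dehomogenizes, mirroring the
    // per-pixel computation performed by reprojectImageTo3D.
    static double[] toPoint3d(double[][] Q, double x, double y, double d) {
        double[] v = {x, y, d, 1.0};
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += Q[i][j] * v[j];
        return new double[]{r[0] / r[3], r[1] / r[3], r[2] / r[3]};
    }

    public static void main(String[] args) {
        double[][] Q = {{1, 0, 0, 0},
                        {0, 1, 0, 0},
                        {0, 0, 0, 1},
                        {0, 0, 1, 0}};  // toy reprojection matrix
        double[] p = toPoint3d(Q, 2, 4, 2);  // pixel (2,4) with disparity 2
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}
```

With a real Q from stereoRectify the resulting (X, Y, Z) lies in the first camera's rectified coordinate system, as noted above.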
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="reprojectImageTo3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean)">
|
|
<h3>reprojectImageTo3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">reprojectImageTo3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> disparity,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> _3dImage,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
boolean handleMissingValues)</span></div>
|
|
<div class="block">Reprojects a disparity image to 3D space.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>disparity</code> - Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit
|
|
floating-point disparity image. The values of 8-bit / 16-bit signed formats are assumed to have no
|
|
fractional bits. If the disparity is 16-bit signed format, as computed by REF: StereoBM or
|
|
REF: StereoSGBM and possibly other algorithms, it should be divided by 16 (and scaled to float) before
|
|
being used here.</dd>
|
|
<dd><code>_3dImage</code> - Output 3-channel floating-point image of the same size as disparity. Each element of
|
|
_3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity map. If one
|
|
uses Q obtained by REF: stereoRectify, then the returned points are represented in the first
|
|
camera's rectified coordinate system.</dd>
|
|
<dd><code>Q</code> - \(4 \times 4\) perspective transformation matrix that can be obtained with
|
|
REF: stereoRectify.</dd>
|
|
<dd><code>handleMissingValues</code> - Indicates whether the function should handle missing values (i.e.
|
|
points where the disparity was not computed). If handleMissingValues=true, then pixels with the
|
|
minimal disparity that corresponds to the outliers (see StereoMatcher::compute ) are transformed
|
|
to 3D points with a very large Z value (currently set to 10000).
|
|
|
|
The function transforms a single-channel disparity map to a 3-channel image representing a 3D
|
|
surface. That is, for each pixel (x,y) and the corresponding disparity d=disparity(x,y) , it
|
|
computes:
|
|
|
|
\(\begin{bmatrix}
|
|
X \\
|
|
Y \\
|
|
Z \\
|
|
W
|
|
\end{bmatrix} = Q \begin{bmatrix}
|
|
x \\
|
|
y \\
|
|
\texttt{disparity} (x,y) \\
|
|
1
|
|
\end{bmatrix}.\)
|
|
|
|
SEE:
|
|
To reproject a sparse set of points {(x,y,d),...} to 3D space, use perspectiveTransform.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="reprojectImageTo3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>reprojectImageTo3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">reprojectImageTo3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> disparity,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> _3dImage,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q)</span></div>
|
|
<div class="block">Reprojects a disparity image to 3D space.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>disparity</code> - Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit
|
|
floating-point disparity image. The values of 8-bit / 16-bit signed formats are assumed to have no
|
|
fractional bits. If the disparity is 16-bit signed format, as computed by REF: StereoBM or
|
|
REF: StereoSGBM and possibly other algorithms, it should be divided by 16 (and scaled to float) before
|
|
being used here.</dd>
|
|
<dd><code>_3dImage</code> - Output 3-channel floating-point image of the same size as disparity. Each element of
|
|
_3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity map. If one
|
|
uses Q obtained by REF: stereoRectify, then the returned points are represented in the first
|
|
camera's rectified coordinate system.</dd>
|
|
<dd><code>Q</code> - \(4 \times 4\) perspective transformation matrix that can be obtained with
|
|
REF: stereoRectify.
|
|
|
|
The function transforms a single-channel disparity map to a 3-channel image representing a 3D
|
|
surface. That is, for each pixel (x,y) and the corresponding disparity d=disparity(x,y) , it
|
|
computes:
|
|
|
|
\(\begin{bmatrix}
|
|
X \\
|
|
Y \\
|
|
Z \\
|
|
W
|
|
\end{bmatrix} = Q \begin{bmatrix}
|
|
x \\
|
|
y \\
|
|
\texttt{disparity} (x,y) \\
|
|
1
|
|
\end{bmatrix}.\)
|
|
|
|
SEE:
|
|
To reproject a sparse set of points {(x,y,d),...} to 3D space, use perspectiveTransform.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="sampsonDistance(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>sampsonDistance</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">sampsonDistance</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> pt1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> pt2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> F)</span></div>
|
|
<div class="block">Calculates the Sampson Distance between two points.
|
|
|
|
The function cv::sampsonDistance calculates and returns the first order approximation of the geometric error as:
|
|
\(
|
|
sd( \texttt{pt1} , \texttt{pt2} )=
|
|
\frac{(\texttt{pt2}^t \cdot \texttt{F} \cdot \texttt{pt1})^2}
|
|
{((\texttt{F} \cdot \texttt{pt1})(0))^2 +
|
|
((\texttt{F} \cdot \texttt{pt1})(1))^2 +
|
|
((\texttt{F}^t \cdot \texttt{pt2})(0))^2 +
|
|
((\texttt{F}^t \cdot \texttt{pt2})(1))^2}
|
|
\)
|
|
The fundamental matrix may be calculated using the #findFundamentalMat function. See CITE: HartleyZ00 11.4.3 for details.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>pt1</code> - first homogeneous 2d point</dd>
|
|
<dd><code>pt2</code> - second homogeneous 2d point</dd>
|
|
<dd><code>F</code> - fundamental matrix</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>The computed Sampson distance.</dd>
|
|
</dl>
|
|
</section>
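The formula above is straightforward to evaluate by hand; a pure-Java transcription of it (the homogeneous points and the matrix F are made up for illustration):

```java
public class SampsonSketch {
    // i-th component of M * p for a 3x3 matrix and homogeneous 2D point.
    static double mulRow(double[][] M, double[] p, int row) {
        return M[row][0] * p[0] + M[row][1] * p[1] + M[row][2] * p[2];
    }

    // First order approximation of the geometric error, as in cv::sampsonDistance.
    static double sampson(double[] pt1, double[] pt2, double[][] F) {
        double[][] Ft = {{F[0][0], F[1][0], F[2][0]},
                         {F[0][1], F[1][1], F[2][1]},
                         {F[0][2], F[1][2], F[2][2]}};  // F transposed
        // Numerator: (pt2^T * F * pt1)^2
        double num = pt2[0] * mulRow(F, pt1, 0) + pt2[1] * mulRow(F, pt1, 1)
                   + pt2[2] * mulRow(F, pt1, 2);
        // Denominator: squared first two components of F*pt1 and F^T*pt2
        double den = Math.pow(mulRow(F, pt1, 0), 2) + Math.pow(mulRow(F, pt1, 1), 2)
                   + Math.pow(mulRow(Ft, pt2, 0), 2) + Math.pow(mulRow(Ft, pt2, 1), 2);
        return num * num / den;
    }

    public static void main(String[] args) {
        double[][] F = {{0, -1, 0}, {1, 0, 0}, {0, 0, 0}};  // illustrative matrix
        System.out.println(sampson(new double[]{1, 0, 1}, new double[]{0, 1, 1}, F));
    }
}
```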
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,double)">
|
|
<h3>estimateAffine3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">estimateAffine3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
double ransacThreshold,
|
|
double confidence)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
z\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & a_{13}\\
|
|
a_{21} & a_{22} & a_{23}\\
|
|
a_{31} & a_{32} & a_{33}\\
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
Z\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
b_3\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - First input 3D point set containing \((X,Y,Z)\).</dd>
|
|
<dd><code>dst</code> - Second input 3D point set containing \((x,y,z)\).</dd>
|
|
<dd><code>out</code> - Output 3D affine transformation matrix \(3 \times 4\) of the form
|
|
\(
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & a_{13} & b_1\\
|
|
a_{21} & a_{22} & a_{23} & b_2\\
|
|
a_{31} & a_{32} & a_{33} & b_3\\
|
|
\end{bmatrix}
|
|
\)</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).</dd>
|
|
<dd><code>ransacThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider a point as
|
|
an inlier.</dd>
|
|
<dd><code>confidence</code> - Confidence level, between 0 and 1, for the estimated transformation. Anything
|
|
between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation
|
|
significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
|
|
|
|
The function estimates an optimal 3D affine transformation between two 3D point sets using the
|
|
RANSAC algorithm.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
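The model being fitted above maps each source point through the \(3 \times 4\) transform \([A \mid b]\). Applying such a transform in plain Java, to show what the estimated matrix means (the matrix values are invented for illustration):

```java
public class Affine3dModel {
    // Applies the 3x4 affine transform [A | b] that estimateAffine3D recovers:
    // dst = A * src + b.
    static double[] apply(double[][] T, double[] p) {
        double[] q = new double[3];
        for (int i = 0; i < 3; i++)
            q[i] = T[i][0] * p[0] + T[i][1] * p[1] + T[i][2] * p[2] + T[i][3];
        return q;
    }

    public static void main(String[] args) {
        // Identity rotation part plus translation b = (1, 2, 3).
        double[][] T = {{1, 0, 0, 1},
                        {0, 1, 0, 2},
                        {0, 0, 1, 3}};
        double[] q = apply(T, new double[]{1, 1, 1});
        System.out.println(q[0] + " " + q[1] + " " + q[2]);
    }
}
```

estimateAffine3D searches (with RANSAC) for the T that best maps src onto dst under the ransacThreshold reprojection error.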
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)">
|
|
<h3>estimateAffine3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">estimateAffine3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
double ransacThreshold)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
z\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & a_{13}\\
|
|
a_{21} & a_{22} & a_{23}\\
|
|
a_{31} & a_{32} & a_{33}\\
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
Z\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
b_3\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - First input 3D point set containing \((X,Y,Z)\).</dd>
|
|
<dd><code>dst</code> - Second input 3D point set containing \((x,y,z)\).</dd>
|
|
<dd><code>out</code> - Output 3D affine transformation matrix \(3 \times 4\) of the form
|
|
\(
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & a_{13} & b_1\\
|
|
a_{21} & a_{22} & a_{23} & b_2\\
|
|
a_{31} & a_{32} & a_{33} & b_3\\
|
|
\end{bmatrix}
|
|
\)</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).</dd>
|
|
<dd><code>ransacThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider a point as
|
|
an inlier.
|
|
|
|
The function estimates an optimal 3D affine transformation between two 3D point sets using the
|
|
RANSAC algorithm.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>estimateAffine3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">estimateAffine3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
z\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & a_{13}\\
|
|
a_{21} & a_{22} & a_{23}\\
|
|
a_{31} & a_{32} & a_{33}\\
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
Z\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
b_3\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - First input 3D point set containing \((X,Y,Z)\).</dd>
|
|
<dd><code>dst</code> - Second input 3D point set containing \((x,y,z)\).</dd>
|
|
<dd><code>out</code> - Output 3D affine transformation matrix \(3 \times 4\) of the form
|
|
\(
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & a_{13} & b_1\\
|
|
a_{21} & a_{22} & a_{23} & b_2\\
|
|
a_{31} & a_{32} & a_{33} & b_3\\
|
|
\end{bmatrix}
|
|
\)</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).
|
|
|
|
|
|
The function estimates an optimal 3D affine transformation between two 3D point sets using the
|
|
RANSAC algorithm.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
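As a minimal sketch of this overload (the demo class name and point values are hypothetical, and the OpenCV native library is assumed to be on <code>java.library.path</code>), the following recovers a pure translation as a \(3 \times 4\) \([A \mid b]\) matrix:

```java
// Sketch (hypothetical demo class, illustrative values): recover a known 3D
// affine map -- identity A plus a translation b = (1, 2, 3) -- with the
// RANSAC-based estimateAffine3D overload.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point3;

public class EstimateAffine3DDemo {
    // Returns the 3x4 [A | b] matrix, or an empty Mat if estimation failed.
    public static Mat estimate(Mat inliers) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Four non-coplanar (X,Y,Z) source points...
        MatOfPoint3f src = new MatOfPoint3f(
                new Point3(0, 0, 0), new Point3(1, 0, 0),
                new Point3(0, 1, 0), new Point3(0, 0, 1));
        // ...and the same points shifted by (1, 2, 3).
        MatOfPoint3f dst = new MatOfPoint3f(
                new Point3(1, 2, 3), new Point3(2, 2, 3),
                new Point3(1, 3, 3), new Point3(1, 2, 4));
        Mat out = new Mat();
        int ok = Calib3d.estimateAffine3D(src, dst, out, inliers);
        return ok == 1 ? out : new Mat();
    }

    public static void main(String[] args) {
        Mat inliers = new Mat();
        Mat T = estimate(inliers);
        System.out.println(T.dump()); // A should be near identity, b near (1, 2, 3)
    }
}
```

With noise-free data, all four flags in <code>inliers</code> come back 1; the threshold-bearing overloads matter once the correspondences contain outliers.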
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat,double[],boolean)">
|
|
<h3>estimateAffine3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
double[] scale,
|
|
boolean force_rotation)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.
|
|
|
|
 It computes \(R,s,t\) minimizing \(\sum_{i} \| dst_i - s \cdot R \cdot src_i - t \|^2\),
 where \(R\) is a 3x3 rotation matrix, \(t\) is a 3x1 translation vector and \(s\) is a
 scalar scale value. This is an implementation of the algorithm by Umeyama CITE: umeyama1991least.
 The estimated transform is a similarity transform (uniform scale), a subclass of affine
 transformations with 7 degrees of freedom. The paired point sets need to comprise at least 3
 points each.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - First input 3D point set.</dd>
|
|
<dd><code>dst</code> - Second input 3D point set.</dd>
|
|
<dd><code>scale</code> - If null is passed, the scale parameter s will be assumed to be 1.0.
 Else the pointed-to variable will be set to the optimal scale.</dd>
|
|
<dd><code>force_rotation</code> - If true, the returned rotation will never be a reflection.
|
|
This might be unwanted, e.g. when optimizing a transform between a right- and a
|
|
left-handed coordinate system.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>3D affine transformation matrix \(3 \times 4\) of the form
|
|
\(T =
|
|
\begin{bmatrix}
|
|
R & t\\
|
|
\end{bmatrix}
|
|
\)</dd>
|
|
</dl>
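A minimal sketch of the Umeyama-based overload (hypothetical demo class, illustrative values; assumes the OpenCV native library can be loaded). Here <code>dst</code> is <code>src</code> uniformly scaled by 2 and shifted, so the reported scale should come out near 2:

```java
// Sketch: estimate a similarity transform (rotation R, translation t, scale s)
// with the Umeyama-based estimateAffine3D overload; the optimal scale is
// written into scale[0].
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point3;

public class UmeyamaDemo {
    // Writes the optimal scale into scale[0] and returns the 3x4 [R | t] matrix.
    public static Mat estimate(double[] scale) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        MatOfPoint3f src = new MatOfPoint3f(
                new Point3(0, 0, 0), new Point3(1, 0, 0), new Point3(0, 1, 0));
        // dst = 2 * src + (5, 5, 5): identity rotation, scale 2.
        MatOfPoint3f dst = new MatOfPoint3f(
                new Point3(5, 5, 5), new Point3(7, 5, 5), new Point3(5, 7, 5));
        // force_rotation = true guarantees R is a proper rotation (no reflection).
        return Calib3d.estimateAffine3D(src, dst, scale, true);
    }

    public static void main(String[] args) {
        double[] scale = new double[1];
        Mat T = estimate(scale);
        System.out.println("scale = " + scale[0]); // expected close to 2.0
        System.out.println(T.dump());
    }
}
```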
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat,double[])">
|
|
<h3>estimateAffine3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
double[] scale)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.
|
|
|
|
 It computes \(R,s,t\) minimizing \(\sum_{i} \| dst_i - s \cdot R \cdot src_i - t \|^2\),
 where \(R\) is a 3x3 rotation matrix, \(t\) is a 3x1 translation vector and \(s\) is a
 scalar scale value. This is an implementation of the algorithm by Umeyama CITE: umeyama1991least.
 The estimated transform is a similarity transform (uniform scale), a subclass of affine
 transformations with 7 degrees of freedom. The paired point sets need to comprise at least 3
 points each.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - First input 3D point set.</dd>
|
|
<dd><code>dst</code> - Second input 3D point set.</dd>
|
|
<dd><code>scale</code> - If null is passed, the scale parameter s will be assumed to be 1.0.
 Else the pointed-to variable will be set to the optimal scale.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>3D affine transformation matrix \(3 \times 4\) of the form
|
|
\(T =
|
|
\begin{bmatrix}
|
|
R & t\\
|
|
\end{bmatrix}
|
|
\)</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine3D(org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>estimateAffine3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 3D point sets.
|
|
|
|
 It computes \(R,s,t\) minimizing \(\sum_{i} \| dst_i - s \cdot R \cdot src_i - t \|^2\),
 where \(R\) is a 3x3 rotation matrix, \(t\) is a 3x1 translation vector and \(s\) is a
 scalar scale value. This is an implementation of the algorithm by Umeyama CITE: umeyama1991least.
 The estimated transform is a similarity transform (uniform scale), a subclass of affine
 transformations with 7 degrees of freedom. The paired point sets need to comprise at least 3
 points each.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - First input 3D point set.</dd>
|
|
<dd><code>dst</code> - Second input 3D point set.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>3D affine transformation matrix \(3 \times 4\) of the form
|
|
\(T =
|
|
\begin{bmatrix}
|
|
R & t\\
|
|
\end{bmatrix}
|
|
\)</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateTranslation3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,double)">
|
|
<h3>estimateTranslation3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">estimateTranslation3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
double ransacThreshold,
|
|
double confidence)</span></div>
|
|
<div class="block">Computes an optimal translation between two 3D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
z\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
Z\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
b_3\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - First input 3D point set containing \((X,Y,Z)\).</dd>
|
|
<dd><code>dst</code> - Second input 3D point set containing \((x,y,z)\).</dd>
|
|
<dd><code>out</code> - Output 3D translation vector \(3 \times 1\) of the form
|
|
\(
|
|
\begin{bmatrix}
|
|
b_1 \\
|
|
b_2 \\
|
|
b_3 \\
|
|
\end{bmatrix}
|
|
\)</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).</dd>
|
|
<dd><code>ransacThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider a point as
|
|
an inlier.</dd>
|
|
<dd><code>confidence</code> - Confidence level, between 0 and 1, for the estimated transformation. Anything
|
|
between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation
|
|
significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
|
|
|
|
The function estimates an optimal 3D translation between two 3D point sets using the
|
|
RANSAC algorithm.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
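The following sketch (hypothetical demo class, illustrative values; assumes the OpenCV native library can be loaded) shows the RANSAC behaviour: three point pairs follow a pure translation while the fourth is a gross outlier the algorithm should flag 0 in <code>inliers</code>:

```java
// Sketch: robustly estimate a pure 3D translation with estimateTranslation3D;
// the fourth correspondence is an outlier and should be rejected.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point3;

public class Translation3DDemo {
    // Returns the 3x1 translation vector (b1, b2, b3), or an empty Mat on failure.
    public static Mat estimate(Mat inliers) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        MatOfPoint3f src = new MatOfPoint3f(
                new Point3(0, 0, 0), new Point3(1, 0, 0),
                new Point3(0, 1, 0), new Point3(9, 9, 9));
        // First three pairs follow dst = src + (1, 2, 3); the last one does not.
        MatOfPoint3f dst = new MatOfPoint3f(
                new Point3(1, 2, 3), new Point3(2, 2, 3),
                new Point3(1, 3, 3), new Point3(0, 0, 0));
        Mat out = new Mat();
        int ok = Calib3d.estimateTranslation3D(src, dst, out, inliers, 0.5, 0.99);
        return ok == 1 ? out : new Mat();
    }

    public static void main(String[] args) {
        Mat inliers = new Mat();
        Mat t = estimate(inliers);
        System.out.println(t.dump()); // expected near (1, 2, 3)
    }
}
```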
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateTranslation3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)">
|
|
<h3>estimateTranslation3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">estimateTranslation3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
double ransacThreshold)</span></div>
|
|
<div class="block">Computes an optimal translation between two 3D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
z\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
Z\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
b_3\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - First input 3D point set containing \((X,Y,Z)\).</dd>
|
|
<dd><code>dst</code> - Second input 3D point set containing \((x,y,z)\).</dd>
|
|
<dd><code>out</code> - Output 3D translation vector \(3 \times 1\) of the form
|
|
\(
|
|
\begin{bmatrix}
|
|
b_1 \\
|
|
b_2 \\
|
|
b_3 \\
|
|
\end{bmatrix}
|
|
\)</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).</dd>
|
|
<dd><code>ransacThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider a point as
|
|
an inlier.
|
|
|
|
|
|
The function estimates an optimal 3D translation between two 3D point sets using the
|
|
RANSAC algorithm.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateTranslation3D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>estimateTranslation3D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">estimateTranslation3D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> out,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</span></div>
|
|
<div class="block">Computes an optimal translation between two 3D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
z\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
Z\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
b_3\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - First input 3D point set containing \((X,Y,Z)\).</dd>
|
|
<dd><code>dst</code> - Second input 3D point set containing \((x,y,z)\).</dd>
|
|
<dd><code>out</code> - Output 3D translation vector \(3 \times 1\) of the form
|
|
\(
|
|
\begin{bmatrix}
|
|
b_1 \\
|
|
b_2 \\
|
|
b_3 \\
|
|
\end{bmatrix}
|
|
\)</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).
|
|
|
|
|
|
The function estimates an optimal 3D translation between two 3D point sets using the
|
|
RANSAC algorithm.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long,double,long)">
|
|
<h3>estimateAffine2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters,
|
|
double confidence,
|
|
long refineIters)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12}\\
|
|
a_{21} & a_{22}\\
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set containing \((X,Y)\).</dd>
|
|
<dd><code>to</code> - Second input 2D point set containing \((x,y)\).</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).</dd>
|
|
<dd><code>method</code> - Robust method used to compute transformation. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
RANSAC is the default method.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider
|
|
a point as an inlier. Applies only to RANSAC.</dd>
|
|
<dd><code>maxIters</code> - The maximum number of robust method iterations.</dd>
|
|
<dd><code>confidence</code> - Confidence level, between 0 and 1, for the estimated transformation. Anything
|
|
between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation
|
|
significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.</dd>
|
|
<dd><code>refineIters</code> - Maximum number of iterations of refining algorithm (Levenberg-Marquardt).
|
|
 Passing 0 disables refinement, so the output matrix will be the output of the robust method.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation
|
|
could not be estimated. The returned matrix has the following form:
|
|
\(
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & b_1\\
|
|
a_{21} & a_{22} & b_2\\
|
|
\end{bmatrix}
|
|
\)
|
|
|
|
The function estimates an optimal 2D affine transformation between two 2D point sets using the
|
|
selected robust algorithm.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
 correctly only when more than 50% of the points are inliers.
|
|
|
|
SEE: estimateAffinePartial2D, getAffineTransform</dd>
|
|
</dl>
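A minimal sketch of the full overload (hypothetical demo class, illustrative values; assumes the OpenCV native library can be loaded), fitting a 90-degree rotation plus a shift with explicit RANSAC settings:

```java
// Sketch: fit a 2x3 affine matrix to point pairs related by
// (x, y) -> (2 - y, 1 + x), i.e. a 90-degree CCW rotation then a shift (2, 1).
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;

public class Affine2DDemo {
    // Returns the 2x3 [A | b] matrix, or an empty Mat if estimation failed.
    public static Mat estimate(Mat inliers) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        MatOfPoint2f from = new MatOfPoint2f(
                new Point(0, 0), new Point(1, 0), new Point(0, 1), new Point(1, 1));
        // Each target point is (2 - y, 1 + x) of the corresponding source point.
        MatOfPoint2f to = new MatOfPoint2f(
                new Point(2, 1), new Point(2, 2), new Point(1, 1), new Point(1, 2));
        // RANSAC, 3 px reprojection threshold, 2000 iterations max,
        // 0.99 confidence, up to 10 Levenberg-Marquardt refinement iterations.
        return Calib3d.estimateAffine2D(from, to, inliers, Calib3d.RANSAC,
                3.0, 2000L, 0.99, 10L);
    }

    public static void main(String[] args) {
        Mat inliers = new Mat();
        Mat A = estimate(inliers);
        System.out.println(A.dump()); // rows near [0 -1 2] and [1 0 1]
    }
}
```

The shorter overloads below default <code>method</code> to RANSAC and use the library defaults for the remaining parameters.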
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long,double)">
|
|
<h3>estimateAffine2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters,
|
|
double confidence)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12}\\
|
|
a_{21} & a_{22}\\
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set containing \((X,Y)\).</dd>
|
|
<dd><code>to</code> - Second input 2D point set containing \((x,y)\).</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).</dd>
|
|
<dd><code>method</code> - Robust method used to compute transformation. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
RANSAC is the default method.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider
|
|
a point as an inlier. Applies only to RANSAC.</dd>
|
|
<dd><code>maxIters</code> - The maximum number of robust method iterations.</dd>
|
|
<dd><code>confidence</code> - Confidence level, between 0 and 1, for the estimated transformation. Anything
|
|
between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation
|
|
 significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation
|
|
could not be estimated. The returned matrix has the following form:
|
|
\(
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & b_1\\
|
|
a_{21} & a_{22} & b_2\\
|
|
\end{bmatrix}
|
|
\)
|
|
|
|
The function estimates an optimal 2D affine transformation between two 2D point sets using the
|
|
selected robust algorithm.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
 correctly only when more than 50% of the points are inliers.
|
|
|
|
SEE: estimateAffinePartial2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long)">
|
|
<h3>estimateAffine2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12}\\
|
|
a_{21} & a_{22}\\
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set containing \((X,Y)\).</dd>
|
|
<dd><code>to</code> - Second input 2D point set containing \((x,y)\).</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).</dd>
|
|
<dd><code>method</code> - Robust method used to compute transformation. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
RANSAC is the default method.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider
|
|
a point as an inlier. Applies only to RANSAC.</dd>
|
|
<dd><code>maxIters</code> - The maximum number of robust method iterations.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation
|
|
could not be estimated. The returned matrix has the following form:
|
|
\(
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & b_1\\
|
|
a_{21} & a_{22} & b_2\\
|
|
\end{bmatrix}
|
|
\)
|
|
|
|
The function estimates an optimal 2D affine transformation between two 2D point sets using the
|
|
selected robust algorithm.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
 correctly only when more than 50% of the points are inliers.
|
|
|
|
SEE: estimateAffinePartial2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)">
|
|
<h3>estimateAffine2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12}\\
|
|
a_{21} & a_{22}\\
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set containing \((X,Y)\).</dd>
|
|
<dd><code>to</code> - Second input 2D point set containing \((x,y)\).</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).</dd>
|
|
<dd><code>method</code> - Robust method used to compute transformation. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
RANSAC is the default method.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider
|
|
 a point as an inlier. Applies only to RANSAC.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation
|
|
could not be estimated. The returned matrix has the following form:
|
|
\(
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & b_1\\
|
|
a_{21} & a_{22} & b_2\\
|
|
\end{bmatrix}
|
|
\)
|
|
|
|
The function estimates an optimal 2D affine transformation between two 2D point sets using the
|
|
selected robust algorithm.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
 correctly only when more than 50% of the points are inliers.
|
|
|
|
SEE: estimateAffinePartial2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>estimateAffine2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12}\\
|
|
a_{21} & a_{22}\\
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set containing \((X,Y)\).</dd>
|
|
<dd><code>to</code> - Second input 2D point set containing \((x,y)\).</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).</dd>
|
|
<dd><code>method</code> - Robust method used to compute transformation. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
RANSAC is the default method.
|
|
</li>
|
|
 </ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation
|
|
could not be estimated. The returned matrix has the following form:
|
|
\(
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & b_1\\
|
|
a_{21} & a_{22} & b_2\\
|
|
\end{bmatrix}
|
|
\)
|
|
|
|
The function estimates an optimal 2D affine transformation between two 2D point sets using the
|
|
selected robust algorithm.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
 correctly only when more than 50% of the points are inliers.
|
|
|
|
SEE: estimateAffinePartial2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>estimateAffine2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12}\\
|
|
a_{21} & a_{22}\\
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set containing \((X,Y)\).</dd>
|
|
<dd><code>to</code> - Second input 2D point set containing \((x,y)\).</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers (1-inlier, 0-outlier).</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation
|
|
could not be estimated. The returned matrix has the following form:
|
|
\(
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & b_1\\
|
|
a_{21} & a_{22} & b_2\\
|
|
\end{bmatrix}
|
|
\)
|
|
|
|
The function estimates an optimal 2D affine transformation between two 2D point sets using the
|
|
selected robust algorithm.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when there are more than 50% of inliers.
|
|
|
|
SEE: estimateAffinePartial2D, getAffineTransform</dd>
|
|
</dl>
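The transformation formula above can be checked numerically: given a \(2 \times 3\) affine matrix as returned by this method, each source point \((X,Y)\) maps to \((x,y)\) as sketched below. This is a plain-Java illustration that does not require the OpenCV native library; the matrix values are invented for the example.

```java
public class AffineApply {
    // Apply a 2x3 affine matrix (row-major: a11, a12, b1, a21, a22, b2) to a point (X, Y).
    static double[] apply(double[] m, double X, double Y) {
        double x = m[0] * X + m[1] * Y + m[2]; // x = a11*X + a12*Y + b1
        double y = m[3] * X + m[4] * Y + m[5]; // y = a21*X + a22*Y + b2
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // Hypothetical output of estimateAffine2D: uniform scale by 2, translate by (10, -5).
        double[] m = { 2, 0, 10, 0, 2, -5 };
        double[] p = apply(m, 3, 4);
        System.out.println(p[0] + " " + p[1]); // 16.0 3.0
    }
}
```

In the Java bindings the returned <code>Mat</code> holds these six values in the same row-major layout, so they can be read out with <code>Mat.get(row, col)</code>.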
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>estimateAffine2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to)</span></div>
|
|
<div class="block">Computes an optimal affine transformation between two 2D point sets.
|
|
|
|
It computes
|
|
\(
|
|
\begin{bmatrix}
|
|
x\\
|
|
y\\
|
|
\end{bmatrix}
|
|
=
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12}\\
|
|
a_{21} & a_{22}\\
|
|
\end{bmatrix}
|
|
\begin{bmatrix}
|
|
X\\
|
|
Y\\
|
|
\end{bmatrix}
|
|
+
|
|
\begin{bmatrix}
|
|
b_1\\
|
|
b_2\\
|
|
\end{bmatrix}
|
|
\)</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set containing \((X,Y)\).</dd>
|
|
<dd><code>to</code> - Second input 2D point set containing \((x,y)\).

 The omitted parameters take their default values. The robust method is one of:
 <ul>
 <li>
 REF: RANSAC - RANSAC-based robust method
 </li>
 <li>
 REF: LMEDS - Least-Median robust method
 </li>
 </ul>
 RANSAC is the default method. The reprojection threshold determines when a point is counted
 as an inlier and applies only to RANSAC. A confidence level between 0.95 and 0.99 is usually
 good enough; values too close to 1 can slow down the estimation significantly, while values
 lower than 0.8-0.9 can result in an incorrectly estimated transformation. Passing 0 refining
 iterations would disable refining, so the output matrix would be the output of the robust method.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation
|
|
could not be estimated. The returned matrix has the following form:
|
|
\(
|
|
\begin{bmatrix}
|
|
a_{11} & a_{12} & b_1\\
|
|
a_{21} & a_{22} & b_2\\
|
|
\end{bmatrix}
|
|
\)
|
|
|
|
The function estimates an optimal 2D affine transformation between two 2D point sets using the
|
|
selected robust algorithm.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when there are more than 50% of inliers.
|
|
|
|
SEE: estimateAffinePartial2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffine2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.calib3d.UsacParams)">
|
|
<h3>estimateAffine2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffine2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> pts1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> pts2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
<a href="UsacParams.html" title="class in org.opencv.calib3d">UsacParams</a> params)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long,double,long)">
|
|
<h3>estimateAffinePartial2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffinePartial2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters,
|
|
double confidence,
|
|
long refineIters)</span></div>
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set.</dd>
|
|
<dd><code>to</code> - Second input 2D point set.</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers.</dd>
|
|
<dd><code>method</code> - Robust method used to compute transformation. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
RANSAC is the default method.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider
|
|
a point as an inlier. Applies only to RANSAC.</dd>
|
|
<dd><code>maxIters</code> - The maximum number of robust method iterations.</dd>
|
|
<dd><code>confidence</code> - Confidence level, between 0 and 1, for the estimated transformation. Anything
|
|
between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation
|
|
significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.</dd>
|
|
<dd><code>refineIters</code> - Maximum number of iterations of the refining algorithm (Levenberg-Marquardt).
 Passing 0 disables refining, so the output matrix is the output of the robust method.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or
|
|
empty matrix if transformation could not be estimated.
|
|
|
|
The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to
|
|
combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust
|
|
estimation.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
Estimated transformation matrix is:
|
|
\( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\
|
|
\sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y
|
|
\end{bmatrix} \)
|
|
Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are
|
|
translations in \( x, y \) axes respectively.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when there are more than 50% of inliers.
|
|
|
|
SEE: estimateAffine2D, getAffineTransform</dd>
|
|
</dl>
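The 4-degrees-of-freedom (similarity) matrix above can be built directly from \(\theta\), \(s\), \(t_x\), \(t_y\), which is a handy way to sanity-check what this method returns. A plain-Java sketch with illustrative values (no OpenCV native library needed):

```java
public class Similarity2D {
    // Build the 4-DOF 2x3 matrix described above from rotation angle theta, uniform
    // scale s and translation (tx, ty). Row-major: [cos(t)*s, -sin(t)*s, tx, sin(t)*s, cos(t)*s, ty].
    static double[] matrix(double theta, double s, double tx, double ty) {
        double c = Math.cos(theta) * s;
        double n = Math.sin(theta) * s;
        return new double[] { c, -n, tx, n, c, ty };
    }

    public static void main(String[] args) {
        // 90-degree rotation, scale 2, translation (1, 0) -- illustrative values only.
        double[] m = matrix(Math.PI / 2, 2.0, 1.0, 0.0);
        // Point (1, 0): rotated onto the y-axis, scaled by 2, then shifted by (1, 0).
        double x = m[0] * 1 + m[1] * 0 + m[2];
        double y = m[3] * 1 + m[4] * 0 + m[5];
        System.out.printf("%.3f %.3f%n", x, y); // 1.000 2.000
    }
}
```

Because the matrix has only these four free parameters, \(\theta\) and \(s\) can also be recovered from a returned matrix via <code>Math.atan2(m[3], m[0])</code> and <code>Math.hypot(m[0], m[3])</code>.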
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long,double)">
|
|
<h3>estimateAffinePartial2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffinePartial2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters,
|
|
double confidence)</span></div>
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set.</dd>
|
|
<dd><code>to</code> - Second input 2D point set.</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers.</dd>
|
|
<dd><code>method</code> - Robust method used to compute transformation. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
RANSAC is the default method.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider
|
|
a point as an inlier. Applies only to RANSAC.</dd>
|
|
<dd><code>maxIters</code> - The maximum number of robust method iterations.</dd>
|
|
<dd><code>confidence</code> - Confidence level, between 0 and 1, for the estimated transformation. Anything
 between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation
 significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
 The number of refining iterations takes its default value; passing 0 would disable refining, so
 the output matrix would be the output of the robust method.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or
|
|
empty matrix if transformation could not be estimated.
|
|
|
|
The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to
|
|
combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust
|
|
estimation.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
Estimated transformation matrix is:
|
|
\( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\
|
|
\sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y
|
|
\end{bmatrix} \)
|
|
Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are
|
|
translations in \( x, y \) axes respectively.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when there are more than 50% of inliers.
|
|
|
|
SEE: estimateAffine2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double,long)">
|
|
<h3>estimateAffinePartial2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffinePartial2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold,
|
|
long maxIters)</span></div>
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set.</dd>
|
|
<dd><code>to</code> - Second input 2D point set.</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers.</dd>
|
|
<dd><code>method</code> - Robust method used to compute transformation. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
RANSAC is the default method.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider
|
|
a point as an inlier. Applies only to RANSAC.</dd>
|
|
<dd><code>maxIters</code> - The maximum number of robust method iterations.

 The confidence level and the number of refining iterations take their default values. A
 confidence level between 0.95 and 0.99 is usually good enough; values too close to 1 can slow
 down the estimation significantly, while values lower than 0.8-0.9 can result in an incorrectly
 estimated transformation. Passing 0 refining iterations would disable refining, so the output
 matrix would be the output of the robust method.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or
|
|
empty matrix if transformation could not be estimated.
|
|
|
|
The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to
|
|
combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust
|
|
estimation.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
Estimated transformation matrix is:
|
|
\( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\
|
|
\sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y
|
|
\end{bmatrix} \)
|
|
Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are
|
|
translations in \( x, y \) axes respectively.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when there are more than 50% of inliers.
|
|
|
|
SEE: estimateAffine2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,double)">
|
|
<h3>estimateAffinePartial2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffinePartial2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method,
|
|
double ransacReprojThreshold)</span></div>
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set.</dd>
|
|
<dd><code>to</code> - Second input 2D point set.</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers.</dd>
|
|
<dd><code>method</code> - Robust method used to compute transformation. The following methods are possible:
|
|
<ul>
|
|
<li>
|
|
REF: RANSAC - RANSAC-based robust method
|
|
</li>
|
|
<li>
|
|
REF: LMEDS - Least-Median robust method
|
|
RANSAC is the default method.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>ransacReprojThreshold</code> - Maximum reprojection error in the RANSAC algorithm to consider
 a point as an inlier. Applies only to RANSAC.

 The remaining parameters take their default values. A confidence level between 0.95 and 0.99 is
 usually good enough; values too close to 1 can slow down the estimation significantly, while
 values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Passing 0
 refining iterations would disable refining, so the output matrix would be the output of the
 robust method.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or
|
|
empty matrix if transformation could not be estimated.
|
|
|
|
The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to
|
|
combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust
|
|
estimation.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
Estimated transformation matrix is:
|
|
\( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\
|
|
\sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y
|
|
\end{bmatrix} \)
|
|
Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are
|
|
translations in \( x, y \) axes respectively.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when there are more than 50% of inliers.
|
|
|
|
SEE: estimateAffine2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>estimateAffinePartial2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffinePartial2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int method)</span></div>
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set.</dd>
|
|
<dd><code>to</code> - Second input 2D point set.</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers.</dd>
|
|
<dd><code>method</code> - Robust method used to compute transformation. The following methods are possible:
 <ul>
 <li>
 REF: RANSAC - RANSAC-based robust method
 </li>
 <li>
 REF: LMEDS - Least-Median robust method
 RANSAC is the default method.
 </li>
 </ul>
 The remaining parameters take their default values. The reprojection threshold determines when a
 point is counted as an inlier and applies only to RANSAC. A confidence level between 0.95 and
 0.99 is usually good enough; values too close to 1 can slow down the estimation significantly,
 while values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Passing 0
 refining iterations would disable refining, so the output matrix would be the output of the
 robust method.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or
|
|
empty matrix if transformation could not be estimated.
|
|
|
|
The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to
|
|
combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust
|
|
estimation.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
Estimated transformation matrix is:
|
|
\( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\
|
|
\sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y
|
|
\end{bmatrix} \)
|
|
Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are
|
|
translations in \( x, y \) axes respectively.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when there are more than 50% of inliers.
|
|
|
|
SEE: estimateAffine2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>estimateAffinePartial2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffinePartial2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</span></div>
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set.</dd>
|
|
<dd><code>to</code> - Second input 2D point set.</dd>
|
|
<dd><code>inliers</code> - Output vector indicating which points are inliers.

 The omitted parameters take their default values. The robust method is one of:
 <ul>
 <li>
 REF: RANSAC - RANSAC-based robust method
 </li>
 <li>
 REF: LMEDS - Least-Median robust method
 </li>
 </ul>
 RANSAC is the default method. The reprojection threshold determines when a point is counted
 as an inlier and applies only to RANSAC. A confidence level between 0.95 and 0.99 is usually
 good enough; values too close to 1 can slow down the estimation significantly, while values
 lower than 0.8-0.9 can result in an incorrectly estimated transformation. Passing 0 refining
 iterations would disable refining, so the output matrix would be the output of the robust method.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or
|
|
empty matrix if transformation could not be estimated.
|
|
|
|
The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to
|
|
combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust
|
|
estimation.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
Estimated transformation matrix is:
|
|
\( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\
|
|
\sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y
|
|
\end{bmatrix} \)
|
|
Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are
|
|
translations in \( x, y \) axes respectively.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when there are more than 50% of inliers.
|
|
|
|
SEE: estimateAffine2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="estimateAffinePartial2D(org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>estimateAffinePartial2D</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">estimateAffinePartial2D</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> from,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> to)</span></div>
|
|
<div class="block">Computes an optimal limited affine transformation with 4 degrees of freedom between
|
|
two 2D point sets.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>from</code> - First input 2D point set.</dd>
|
|
<dd><code>to</code> - Second input 2D point set.

 The omitted parameters take their default values. The robust method is one of:
 <ul>
 <li>
 REF: RANSAC - RANSAC-based robust method
 </li>
 <li>
 REF: LMEDS - Least-Median robust method
 </li>
 </ul>
 RANSAC is the default method. The reprojection threshold determines when a point is counted
 as an inlier and applies only to RANSAC. A confidence level between 0.95 and 0.99 is usually
 good enough; values too close to 1 can slow down the estimation significantly, while values
 lower than 0.8-0.9 can result in an incorrectly estimated transformation. Passing 0 refining
 iterations would disable refining, so the output matrix would be the output of the robust method.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or
|
|
empty matrix if transformation could not be estimated.
|
|
|
|
The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to
|
|
combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust
|
|
estimation.
|
|
|
|
The computed transformation is then refined further (using only inliers) with the
|
|
Levenberg-Marquardt method to reduce the re-projection error even more.
|
|
|
|
Estimated transformation matrix is:
|
|
\( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\
|
|
\sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y
|
|
\end{bmatrix} \)
|
|
Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are
|
|
translations in \( x, y \) axes respectively.
|
|
|
|
<b>Note:</b>
|
|
The RANSAC method can handle practically any ratio of outliers but needs a threshold to
|
|
distinguish inliers from outliers. The method LMeDS does not need any threshold but it works
|
|
correctly only when there are more than 50% of inliers.
|
|
|
|
SEE: estimateAffine2D, getAffineTransform</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="decomposeHomographyMat(org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,java.util.List)">
|
|
<h3>decomposeHomographyMat</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">int</span> <span class="element-name">decomposeHomographyMat</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> H,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rotations,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> translations,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> normals)</span></div>
|
|
<div class="block">Decompose a homography matrix to rotation(s), translation(s) and plane normal(s).</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>H</code> - The input homography matrix between two images.</dd>
|
|
<dd><code>K</code> - The input camera intrinsic matrix.</dd>
|
|
<dd><code>rotations</code> - Array of rotation matrices.</dd>
|
|
<dd><code>translations</code> - Array of translation matrices.</dd>
|
|
<dd><code>normals</code> - Array of plane normal matrices.
|
|
|
|
This function extracts relative camera motion between two views of a planar object and returns up to
|
|
four mathematical solution tuples of rotation, translation, and plane normal. The decomposition of
|
|
the homography matrix H is described in detail in CITE: Malis2007.
|
|
|
|
If the homography H, induced by the plane, gives the constraint
|
|
\(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\) on the source image points
|
|
\(p_i\) and the destination image points \(p'_i\), then the tuple of rotations[k] and
|
|
translations[k] is a change of basis from the source camera's coordinate system to the destination
|
|
camera's coordinate system. However, by decomposing H, one can only get the translation normalized
|
|
by the (typically unknown) depth of the scene, i.e. its direction but with normalized length.
|
|
|
|
If point correspondences are available, at least two solutions may further be invalidated by
 applying the positive depth constraint, i.e. all points must be in front of the camera.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
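The constraint \(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\) above just says that applying H in homogeneous coordinates and dividing by the third component reproduces the destination point. A plain-Java sketch of that mapping, with an invented H (pure translation written as a homography), which does not need the OpenCV native library:

```java
public class HomographyApply {
    // Apply a 3x3 homography H (row-major) to a pixel (x, y) using
    // s * [x', y', 1]^T = H * [x, y, 1]^T, then divide by s to dehomogenize.
    static double[] apply(double[] H, double x, double y) {
        double xp = H[0] * x + H[1] * y + H[2];
        double yp = H[3] * x + H[4] * y + H[5];
        double s  = H[6] * x + H[7] * y + H[8];
        return new double[] { xp / s, yp / s };
    }

    public static void main(String[] args) {
        // Illustrative H: translation by (5, 7) expressed as a homography.
        double[] H = { 1, 0, 5,  0, 1, 7,  0, 0, 1 };
        double[] p = apply(H, 2, 3);
        System.out.println(p[0] + " " + p[1]); // 7.0 10.0
    }
}
```

Because of the division by \(s\), H is only defined up to scale, which is also why the translations returned by this decomposition are normalized by the unknown plane depth.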
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="filterHomographyDecompByVisibleRefpoints(java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>filterHomographyDecompByVisibleRefpoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">filterHomographyDecompByVisibleRefpoints</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rotations,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> normals,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> beforePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> afterPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> possibleSolutions,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> pointsMask)</span></div>
|
|
<div class="block">Filters homography decompositions based on additional information.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rotations</code> - Vector of rotation matrices.</dd>
|
|
<dd><code>normals</code> - Vector of plane normal matrices.</dd>
|
|
<dd><code>beforePoints</code> - Vector of (rectified) visible reference points before the homography is applied</dd>
|
|
<dd><code>afterPoints</code> - Vector of (rectified) visible reference points after the homography is applied</dd>
|
|
<dd><code>possibleSolutions</code> - Vector of int indices representing the viable solution set after filtering</dd>
|
|
<dd><code>pointsMask</code> - optional Mat/Vector of 8u type representing the mask for the inliers as given by the #findHomography function
|
|
|
|
This function is intended to filter the output of the #decomposeHomographyMat based on additional
|
|
information as described in CITE: Malis2007 . The summary of the method: the #decomposeHomographyMat function
|
|
returns 2 unique solutions and their "opposites" for a total of 4 solutions. If we have access to the
|
|
sets of points visible in the camera frame before and after the homography transformation is applied,
|
|
we can determine which are the true potential solutions and which are the opposites by verifying which
|
|
homographies are consistent with all visible reference points being in front of the camera. The inputs
|
|
are left unchanged; the filtered solution set is returned as indices into the existing one.</dd>
|
|
</dl>
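The core geometric idea of the filtering can be sketched in plain Java: a candidate solution survives only if every visible rectified reference point is consistent with the plane lying in front of the camera, which "opposite" solutions (with negated normals) fail. This is a simplified model of the test, not the exact OpenCV implementation, and it runs without the native library:

```java
import java.util.ArrayList;
import java.util.List;

public class DepthFilterSketch {
    // Keep the indices of candidate plane normals for which every rectified reference
    // point m = (x, y, 1) has a positive dot product with the normal, i.e. the plane
    // could lie in front of the camera for all observed points. (Simplified sketch.)
    static List<Integer> filter(double[][] normals, double[][] points) {
        List<Integer> viable = new ArrayList<>();
        for (int j = 0; j < normals.length; j++) {
            boolean allInFront = true;
            for (double[] m : points) {
                double dot = m[0] * normals[j][0] + m[1] * normals[j][1] + 1.0 * normals[j][2];
                if (dot <= 0) { allInFront = false; break; }
            }
            if (allInFront) viable.add(j);
        }
        return viable;
    }

    public static void main(String[] args) {
        double[][] normals = { { 0, 0, 1 }, { 0, 0, -1 } }; // a solution and its "opposite"
        double[][] points  = { { 0.1, 0.2 }, { -0.3, 0.05 } }; // rectified (x, y) coordinates
        System.out.println(filter(normals, points)); // [0]
    }
}
```

In the real API the surviving indices are written into <code>possibleSolutions</code> as a <code>Mat</code> of ints, to be used for indexing the <code>rotations</code>, <code>translations</code>, and <code>normals</code> lists returned by <code>decomposeHomographyMat</code>.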
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="filterHomographyDecompByVisibleRefpoints(java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>filterHomographyDecompByVisibleRefpoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">filterHomographyDecompByVisibleRefpoints</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rotations,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> normals,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> beforePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> afterPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> possibleSolutions)</span></div>
|
|
<div class="block">Filters homography decompositions based on additional information.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>rotations</code> - Vector of rotation matrices.</dd>
|
|
<dd><code>normals</code> - Vector of plane normal matrices.</dd>
|
|
<dd><code>beforePoints</code> - Vector of (rectified) visible reference points before the homography is applied</dd>
|
|
<dd><code>afterPoints</code> - Vector of (rectified) visible reference points after the homography is applied</dd>
|
|
<dd><code>possibleSolutions</code> - Vector of int indices representing the viable solution set after filtering
|
|
|
|
This function is intended to filter the output of #decomposeHomographyMat based on additional
|
|
information as described in CITE: Malis2007 . The summary of the method: the #decomposeHomographyMat function
|
|
returns 2 unique solutions and their "opposites" for a total of 4 solutions. If we have access to the
|
|
sets of points visible in the camera frame before and after the homography transformation is applied,
|
|
we can determine which are the true potential solutions and which are the opposites by verifying which
|
|
homographies are consistent with all visible reference points being in front of the camera. The inputs
|
|
are left unchanged; the filtered solution set is returned as indices into the existing one.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="undistort(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>undistort</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">undistort</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newCameraMatrix)</span></div>
|
|
<div class="block">Transforms an image to compensate for lens distortion.
|
|
|
|
The function transforms an image to compensate radial and tangential lens distortion.
|
|
|
|
The function is simply a combination of #initUndistortRectifyMap (with unity R ) and #remap
|
|
(with bilinear interpolation). See the former function for details of the transformation being
|
|
performed.
|
|
|
|
Those pixels in the destination image for which there are no corresponding pixels in the source
|
|
image are filled with zeros (black).
|
|
|
|
A particular subset of the source image that will be visible in the corrected image can be regulated
|
|
by newCameraMatrix. You can use #getOptimalNewCameraMatrix to compute the appropriate
|
|
newCameraMatrix depending on your requirements.
|
|
|
|
The camera matrix and the distortion parameters can be determined using #calibrateCamera. If
|
|
the resolution of images is different from the resolution used at the calibration stage, \(f_x,
|
|
f_y, c_x\) and \(c_y\) need to be scaled accordingly, while the distortion coefficients remain
|
|
the same.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - Input (distorted) image.</dd>
|
|
<dd><code>dst</code> - Output (corrected) image that has the same size and type as src .</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera matrix \(A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>newCameraMatrix</code> - Camera matrix of the distorted image. By default, it is the same as
|
|
cameraMatrix but you may additionally scale and shift the result by using a different matrix.</dd>
|
|
</dl>
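A minimal sketch (class name, file names, and calibration values are placeholders): undistort an image using an optimal new camera matrix computed with alpha=0, which crops the result to valid pixels only.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.imgcodecs.Imgcodecs;

public class UndistortExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat src = Imgcodecs.imread("distorted.png");
        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);    // from #calibrateCamera
        cameraMatrix.put(0, 0, 800); cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, src.cols() / 2.0);
        cameraMatrix.put(1, 2, src.rows() / 2.0);
        MatOfDouble distCoeffs = new MatOfDouble(-0.2, 0.05, 0, 0); // k1, k2, p1, p2

        // alpha=0 crops to valid pixels only; alpha=1 would keep all source pixels
        Mat newCameraMatrix = Calib3d.getOptimalNewCameraMatrix(
                cameraMatrix, distCoeffs, src.size(), 0);
        Mat dst = new Mat();
        Calib3d.undistort(src, dst, cameraMatrix, distCoeffs, newCameraMatrix);
        Imgcodecs.imwrite("undistorted.png", dst);
    }
}
```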
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="undistort(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>undistort</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">undistort</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs)</span></div>
|
|
<div class="block">Transforms an image to compensate for lens distortion.
|
|
|
|
The function transforms an image to compensate radial and tangential lens distortion.
|
|
|
|
The function is simply a combination of #initUndistortRectifyMap (with unity R ) and #remap
|
|
(with bilinear interpolation). See the former function for details of the transformation being
|
|
performed.
|
|
|
|
Those pixels in the destination image for which there are no corresponding pixels in the source
|
|
image are filled with zeros (black).
|
|
|
|
A particular subset of the source image that will be visible in the corrected image can be regulated
|
|
by newCameraMatrix. You can use #getOptimalNewCameraMatrix to compute the appropriate
|
|
newCameraMatrix depending on your requirements.
|
|
|
|
The camera matrix and the distortion parameters can be determined using #calibrateCamera. If
|
|
the resolution of images is different from the resolution used at the calibration stage, \(f_x,
|
|
f_y, c_x\) and \(c_y\) need to be scaled accordingly, while the distortion coefficients remain
|
|
the same.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - Input (distorted) image.</dd>
|
|
<dd><code>dst</code> - Output (corrected) image that has the same size and type as src .</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera matrix \(A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
|
|
In this overload, newCameraMatrix defaults to cameraMatrix; use the five-argument overload to additionally scale and shift the result with a different matrix.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="initUndistortRectifyMap(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>initUndistortRectifyMap</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">initUndistortRectifyMap</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newCameraMatrix,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> size,
|
|
int m1type,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map2)</span></div>
|
|
<div class="block">Computes the undistortion and rectification transformation map.
|
|
|
|
The function computes the joint undistortion and rectification transformation and represents the
|
|
result in the form of maps for #remap. The undistorted image looks like original, as if it is
|
|
captured with a camera using the camera matrix = newCameraMatrix and zero distortion. In case of a
|
|
monocular camera, newCameraMatrix is usually equal to cameraMatrix, or it can be computed by
|
|
#getOptimalNewCameraMatrix for a better control over scaling. In case of a stereo camera,
|
|
newCameraMatrix is normally set to P1 or P2 computed by #stereoRectify .
|
|
|
|
Also, this new camera is oriented differently in the coordinate space, according to R. That, for
|
|
example, helps to align two heads of a stereo camera so that the epipolar lines on both images
|
|
become horizontal and have the same y- coordinate (in case of a horizontally aligned stereo camera).
|
|
|
|
The function actually builds the maps for the inverse mapping algorithm that is used by #remap. That
|
|
is, for each pixel \((u, v)\) in the destination (corrected and rectified) image, the function
|
|
computes the corresponding coordinates in the source image (that is, in the original image from
|
|
camera). The following process is applied:
|
|
\(
|
|
\begin{array}{l}
|
|
x \leftarrow (u - {c'}_x)/{f'}_x \\
|
|
y \leftarrow (v - {c'}_y)/{f'}_y \\
|
|
{[X\,Y\,W]} ^T \leftarrow R^{-1}*[x \, y \, 1]^T \\
|
|
x' \leftarrow X/W \\
|
|
y' \leftarrow Y/W \\
|
|
r^2 \leftarrow x'^2 + y'^2 \\
|
|
x'' \leftarrow x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6}
|
|
+ 2p_1 x' y' + p_2(r^2 + 2 x'^2) + s_1 r^2 + s_2 r^4\\
|
|
y'' \leftarrow y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6}
|
|
+ p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' + s_3 r^2 + s_4 r^4 \\
|
|
s\vecthree{x'''}{y'''}{1} =
|
|
\vecthreethree{R_{33}(\tau_x, \tau_y)}{0}{-R_{13}(\tau_x, \tau_y)}
|
|
{0}{R_{33}(\tau_x, \tau_y)}{-R_{23}(\tau_x, \tau_y)}
|
|
{0}{0}{1} R(\tau_x, \tau_y) \vecthree{x''}{y''}{1}\\
|
|
map_x(u,v) \leftarrow x''' f_x + c_x \\
|
|
map_y(u,v) \leftarrow y''' f_y + c_y
|
|
\end{array}
|
|
\)
|
|
where \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
are the distortion coefficients.
|
|
|
|
In case of a stereo camera, this function is called twice: once for each camera head, after
|
|
#stereoRectify, which in its turn is called after #stereoCalibrate. But if the stereo camera
|
|
was not calibrated, it is still possible to compute the rectification transformations directly from
|
|
the fundamental matrix using #stereoRectifyUncalibrated. For each camera, the function computes
|
|
homography H as the rectification transformation in a pixel domain, not a rotation matrix R in 3D
|
|
space. R can be computed from H as
|
|
\(\texttt{R} = \texttt{cameraMatrix} ^{-1} \cdot \texttt{H} \cdot \texttt{cameraMatrix}\)
|
|
where cameraMatrix can be chosen arbitrarily.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix</code> - Input camera matrix \(A=\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>R</code> - Optional rectification transformation in the object space (3x3 matrix). R1 or R2 ,
|
|
computed by #stereoRectify can be passed here. If the matrix is empty, the identity transformation
|
|
is assumed. In #initUndistortRectifyMap, R is assumed to be an identity matrix.</dd>
|
|
<dd><code>newCameraMatrix</code> - New camera matrix \(A'=\vecthreethree{f_x'}{0}{c_x'}{0}{f_y'}{c_y'}{0}{0}{1}\).</dd>
|
|
<dd><code>size</code> - Undistorted image size.</dd>
|
|
<dd><code>m1type</code> - Type of the first output map that can be CV_32FC1, CV_32FC2 or CV_16SC2, see #convertMaps</dd>
|
|
<dd><code>map1</code> - The first output map.</dd>
|
|
<dd><code>map2</code> - The second output map.</dd>
|
|
</dl>
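A minimal sketch (class name and intrinsics are placeholders): precompute the maps once, then remap every frame of a stream, which is cheaper than calling #undistort per frame.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class RectifyMapExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Size size = new Size(640, 480);
        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800); cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 319.5); cameraMatrix.put(1, 2, 239.5);
        MatOfDouble distCoeffs = new MatOfDouble(-0.2, 0.05, 0, 0);

        Mat map1 = new Mat(), map2 = new Mat();
        // Empty R -> identity; reuse cameraMatrix as newCameraMatrix (monocular case)
        Calib3d.initUndistortRectifyMap(cameraMatrix, distCoeffs, new Mat(),
                cameraMatrix, size, CvType.CV_16SC2, map1, map2);

        Mat frame = new Mat(size, CvType.CV_8UC3);      // e.g. from VideoCapture
        Mat rectified = new Mat();
        Imgproc.remap(frame, rectified, map1, map2, Imgproc.INTER_LINEAR);
    }
}
```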
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="initInverseRectificationMap(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>initInverseRectificationMap</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">initInverseRectificationMap</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> newCameraMatrix,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> size,
|
|
int m1type,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map2)</span></div>
|
|
<div class="block">Computes the projection and inverse-rectification transformation map. In essence, this is the inverse of
|
|
#initUndistortRectifyMap to accommodate stereo-rectification of projectors ('inverse-cameras') in projector-camera pairs.
|
|
|
|
The function computes the joint projection and inverse rectification transformation and represents the
|
|
result in the form of maps for #remap. The projected image looks like a distorted version of the original which,
|
|
once projected by a projector, should visually match the original. In case of a monocular camera, newCameraMatrix
|
|
is usually equal to cameraMatrix, or it can be computed by
|
|
#getOptimalNewCameraMatrix for a better control over scaling. In case of a projector-camera pair,
|
|
newCameraMatrix is normally set to P1 or P2 computed by #stereoRectify .
|
|
|
|
The projector is oriented differently in the coordinate space, according to R. In case of projector-camera pairs,
|
|
this helps align the projector (in the same manner as #initUndistortRectifyMap for the camera) to create a stereo-rectified pair. This
|
|
allows epipolar lines on both images to become horizontal and have the same y-coordinate (in case of a horizontally aligned projector-camera pair).
|
|
|
|
The function builds the maps for the inverse mapping algorithm that is used by #remap. That
|
|
is, for each pixel \((u, v)\) in the destination (projected and inverse-rectified) image, the function
|
|
computes the corresponding coordinates in the source image (that is, in the original digital image). The following process is applied:
|
|
|
|
\(
|
|
\begin{array}{l}
|
|
\text{newCameraMatrix}\\
|
|
x \leftarrow (u - {c'}_x)/{f'}_x \\
|
|
y \leftarrow (v - {c'}_y)/{f'}_y \\
|
|
|
|
\\\text{Undistortion}
|
|
\\\scriptsize{\textit{though equation shown is for radial undistortion, function implements cv::undistortPoints()}}\\
|
|
r^2 \leftarrow x^2 + y^2 \\
|
|
\theta \leftarrow \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6}\\
|
|
x' \leftarrow \frac{x}{\theta} \\
|
|
y' \leftarrow \frac{y}{\theta} \\
|
|
|
|
\\\text{Rectification}\\
|
|
{[X\,Y\,W]} ^T \leftarrow R*[x' \, y' \, 1]^T \\
|
|
x'' \leftarrow X/W \\
|
|
y'' \leftarrow Y/W \\
|
|
|
|
\\\text{cameraMatrix}\\
|
|
map_x(u,v) \leftarrow x'' f_x + c_x \\
|
|
map_y(u,v) \leftarrow y'' f_y + c_y
|
|
\end{array}
|
|
\)
|
|
where \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
are the distortion coefficients vector distCoeffs.
|
|
|
|
In case of a stereo-rectified projector-camera pair, this function is called for the projector while #initUndistortRectifyMap is called for the camera head.
|
|
This is done after #stereoRectify, which in turn is called after #stereoCalibrate. If the projector-camera pair
|
|
is not calibrated, it is still possible to compute the rectification transformations directly from
|
|
the fundamental matrix using #stereoRectifyUncalibrated. For the projector and camera, the function computes
|
|
homography H as the rectification transformation in a pixel domain, not a rotation matrix R in 3D
|
|
space. R can be computed from H as
|
|
\(\texttt{R} = \texttt{cameraMatrix} ^{-1} \cdot \texttt{H} \cdot \texttt{cameraMatrix}\)
|
|
where cameraMatrix can be chosen arbitrarily.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix</code> - Input camera matrix \(A=\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>R</code> - Optional rectification transformation in the object space (3x3 matrix). R1 or R2,
|
|
computed by #stereoRectify can be passed here. If the matrix is empty, the identity transformation
|
|
is assumed.</dd>
|
|
<dd><code>newCameraMatrix</code> - New camera matrix \(A'=\vecthreethree{f_x'}{0}{c_x'}{0}{f_y'}{c_y'}{0}{0}{1}\).</dd>
|
|
<dd><code>size</code> - Distorted image size.</dd>
|
|
<dd><code>m1type</code> - Type of the first output map. Can be CV_32FC1, CV_32FC2 or CV_16SC2, see #convertMaps</dd>
|
|
<dd><code>map1</code> - The first output map for #remap.</dd>
|
|
<dd><code>map2</code> - The second output map for #remap.</dd>
|
|
</dl>
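A minimal sketch (class name and projector intrinsics are placeholders): pre-distort a pattern so that, once projected through the distorting projector optics, it appears correct.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class InverseRectifyExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Size size = new Size(1280, 800);                // projector resolution
        Mat projMatrix = Mat.eye(3, 3, CvType.CV_64F);  // projector "intrinsics"
        projMatrix.put(0, 0, 1500); projMatrix.put(1, 1, 1500);
        projMatrix.put(0, 2, 639.5); projMatrix.put(1, 2, 399.5);
        MatOfDouble distCoeffs = new MatOfDouble(0.1, -0.02, 0, 0);

        Mat map1 = new Mat(), map2 = new Mat();
        // Empty R -> identity (non-rectified, distortion-only case)
        Calib3d.initInverseRectificationMap(projMatrix, distCoeffs, new Mat(),
                projMatrix, size, CvType.CV_32FC1, map1, map2);

        Mat pattern = new Mat(size, CvType.CV_8UC1);    // image to be projected
        Mat preDistorted = new Mat();
        Imgproc.remap(pattern, preDistorted, map1, map2, Imgproc.INTER_LINEAR);
    }
}
```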
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="getDefaultNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Size,boolean)">
|
|
<h3>getDefaultNewCameraMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">getDefaultNewCameraMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imgsize,
|
|
boolean centerPrincipalPoint)</span></div>
|
|
<div class="block">Returns the default new camera matrix.
|
|
|
|
The function returns the camera matrix that is either an exact copy of the input cameraMatrix (when
|
|
centerPrincipalPoint=false), or the modified one (when centerPrincipalPoint=true).
|
|
|
|
In the latter case, the new camera matrix will be:
|
|
|
|
\(\begin{bmatrix} f_x && 0 && ( \texttt{imgSize.width} -1)*0.5 \\ 0 && f_y && ( \texttt{imgSize.height} -1)*0.5 \\ 0 && 0 && 1 \end{bmatrix} ,\)
|
|
|
|
where \(f_x\) and \(f_y\) are \((0,0)\) and \((1,1)\) elements of cameraMatrix, respectively.
|
|
|
|
By default, the undistortion functions in OpenCV (see #initUndistortRectifyMap, #undistort) do not
|
|
move the principal point. However, when you work with stereo, it is important to move the principal
|
|
points in both views to the same y-coordinate (which is required by most of stereo correspondence
|
|
algorithms), and possibly to the same x-coordinate too. So, you can form the new camera matrix for
|
|
each view where the principal points are located at the center.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix</code> - Input camera matrix.</dd>
|
|
<dd><code>imgsize</code> - Camera view image size in pixels.</dd>
|
|
<dd><code>centerPrincipalPoint</code> - Location of the principal point in the new camera matrix. The
|
|
parameter indicates whether this location should be at the image center or not.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
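A minimal sketch (class name and intrinsics are placeholders): with centerPrincipalPoint=true the returned matrix keeps \(f_x, f_y\) but moves the principal point to \(((w-1)/2, (h-1)/2)\), matching the formula above.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Size;

public class DefaultNewCameraMatrixExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800); cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 300); cameraMatrix.put(1, 2, 250); // off-center c_x, c_y

        Size imgsize = new Size(640, 480);
        Mat centered = Calib3d.getDefaultNewCameraMatrix(cameraMatrix, imgsize, true);
        // principal point is re-centered: c_x' = (640-1)*0.5, c_y' = (480-1)*0.5
        double cx = centered.get(0, 2)[0];
        double cy = centered.get(1, 2)[0];
    }
}
```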
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="getDefaultNewCameraMatrix(org.opencv.core.Mat,org.opencv.core.Size)">
|
|
<h3>getDefaultNewCameraMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">getDefaultNewCameraMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imgsize)</span></div>
|
|
<div class="block">Returns the default new camera matrix.
|
|
|
|
The function returns the camera matrix that is either an exact copy of the input cameraMatrix (when
|
|
centerPrincipalPoint=false), or the modified one (when centerPrincipalPoint=true).
|
|
|
|
In the latter case, the new camera matrix will be:
|
|
|
|
\(\begin{bmatrix} f_x && 0 && ( \texttt{imgSize.width} -1)*0.5 \\ 0 && f_y && ( \texttt{imgSize.height} -1)*0.5 \\ 0 && 0 && 1 \end{bmatrix} ,\)
|
|
|
|
where \(f_x\) and \(f_y\) are \((0,0)\) and \((1,1)\) elements of cameraMatrix, respectively.
|
|
|
|
By default, the undistortion functions in OpenCV (see #initUndistortRectifyMap, #undistort) do not
|
|
move the principal point. However, when you work with stereo, it is important to move the principal
|
|
points in both views to the same y-coordinate (which is required by most of stereo correspondence
|
|
algorithms), and possibly to the same x-coordinate too. So, you can form the new camera matrix for
|
|
each view where the principal points are located at the center.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix</code> - Input camera matrix.</dd>
|
|
<dd><code>imgsize</code> - Camera view image size in pixels.
|
|
In this overload, centerPrincipalPoint defaults to false, so the principal point is not moved to the image center.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="getDefaultNewCameraMatrix(org.opencv.core.Mat)">
|
|
<h3>getDefaultNewCameraMatrix</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type"><a href="../core/Mat.html" title="class in org.opencv.core">Mat</a></span> <span class="element-name">getDefaultNewCameraMatrix</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix)</span></div>
|
|
<div class="block">Returns the default new camera matrix.
|
|
|
|
The function returns the camera matrix that is either an exact copy of the input cameraMatrix (when
|
|
centerPrincipalPoint=false), or the modified one (when centerPrincipalPoint=true).
|
|
|
|
In the latter case, the new camera matrix will be:
|
|
|
|
\(\begin{bmatrix} f_x && 0 && ( \texttt{imgSize.width} -1)*0.5 \\ 0 && f_y && ( \texttt{imgSize.height} -1)*0.5 \\ 0 && 0 && 1 \end{bmatrix} ,\)
|
|
|
|
where \(f_x\) and \(f_y\) are \((0,0)\) and \((1,1)\) elements of cameraMatrix, respectively.
|
|
|
|
By default, the undistortion functions in OpenCV (see #initUndistortRectifyMap, #undistort) do not
|
|
move the principal point. However, when you work with stereo, it is important to move the principal
|
|
points in both views to the same y-coordinate (which is required by most of stereo correspondence
|
|
algorithms), and possibly to the same x-coordinate too. So, you can form the new camera matrix for
|
|
each view where the principal points are located at the center.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>cameraMatrix</code> - Input camera matrix.
|
|
In this overload, imgsize and centerPrincipalPoint are omitted, so an exact copy of cameraMatrix is returned.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="undistortPoints(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>undistortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">undistortPoints</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> src,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P)</span></div>
|
|
<div class="block">Computes the ideal point coordinates from the observed point coordinates.
|
|
|
|
The function is similar to #undistort and #initUndistortRectifyMap but it operates on a
|
|
sparse set of points instead of a raster image. Also the function performs a reverse transformation
|
|
to #projectPoints. In case of a 3D object, it does not reconstruct its 3D coordinates, but for a
|
|
planar object, it does, up to a translation vector, if the proper R is specified.
|
|
|
|
For each observed point coordinate \((u, v)\) the function computes:
|
|
\(
|
|
\begin{array}{l}
|
|
x^{"} \leftarrow (u - c_x)/f_x \\
|
|
y^{"} \leftarrow (v - c_y)/f_y \\
|
|
(x',y') = undistort(x^{"},y^{"}, \texttt{distCoeffs}) \\
|
|
{[X\,Y\,W]} ^T \leftarrow R*[x' \, y' \, 1]^T \\
|
|
x \leftarrow X/W \\
|
|
y \leftarrow Y/W \\
|
|
\text{only performed if P is specified:} \\
|
|
u' \leftarrow x {f'}_x + {c'}_x \\
|
|
v' \leftarrow y {f'}_y + {c'}_y
|
|
\end{array}
|
|
\)
|
|
|
|
where *undistort* is an approximate iterative algorithm that estimates the normalized original
|
|
point coordinates out of the normalized distorted point coordinates ("normalized" means that the
|
|
coordinates do not depend on the camera matrix).
|
|
|
|
The function can be used for either a stereo camera head or a monocular camera (when R is empty).</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - Observed point coordinates, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or CV_64FC2) (or
|
|
vector<Point2f> ).</dd>
|
|
<dd><code>dst</code> - Output ideal point coordinates (1xN/Nx1 2-channel or vector<Point2f> ) after undistortion and reverse perspective
|
|
transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera matrix \(\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>R</code> - Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by
|
|
#stereoRectify can be passed here. If the matrix is empty, the identity transformation is used.</dd>
|
|
<dd><code>P</code> - New camera matrix (3x3) or new projection matrix (3x4) \(\begin{bmatrix} {f'}_x & 0 & {c'}_x & t_x \\ 0 & {f'}_y & {c'}_y & t_y \\ 0 & 0 & 1 & t_z \end{bmatrix}\). P1 or P2 computed by
|
|
#stereoRectify can be passed here. If the matrix is empty, the identity new camera matrix is used.</dd>
|
|
</dl>
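A minimal sketch (class name, intrinsics, and point values are placeholders): map observed pixel coordinates to normalized coordinates by passing empty R and P; passing a new camera or projection matrix as P would yield pixel output instead.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;

public class UndistortPointsExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        MatOfPoint2f src = new MatOfPoint2f(new Point(400, 300), new Point(120, 80));
        MatOfPoint2f dst = new MatOfPoint2f();
        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800); cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 320); cameraMatrix.put(1, 2, 240);
        MatOfDouble distCoeffs = new MatOfDouble(-0.2, 0.05, 0, 0);

        // Empty R and P -> identity: dst holds normalized coordinates,
        // roughly ((u - c_x)/f_x, (v - c_y)/f_y) corrected for distortion
        Calib3d.undistortPoints(src, dst, cameraMatrix, distCoeffs, new Mat(), new Mat());
    }
}
```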
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="undistortPoints(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>undistortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">undistortPoints</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> src,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R)</span></div>
|
|
<div class="block">Computes the ideal point coordinates from the observed point coordinates.
|
|
|
|
The function is similar to #undistort and #initUndistortRectifyMap but it operates on a
|
|
sparse set of points instead of a raster image. Also the function performs a reverse transformation
|
|
to #projectPoints. In case of a 3D object, it does not reconstruct its 3D coordinates, but for a
|
|
planar object, it does, up to a translation vector, if the proper R is specified.
|
|
|
|
For each observed point coordinate \((u, v)\) the function computes:
|
|
\(
|
|
\begin{array}{l}
|
|
x^{"} \leftarrow (u - c_x)/f_x \\
|
|
y^{"} \leftarrow (v - c_y)/f_y \\
|
|
(x',y') = undistort(x^{"},y^{"}, \texttt{distCoeffs}) \\
|
|
{[X\,Y\,W]} ^T \leftarrow R*[x' \, y' \, 1]^T \\
|
|
x \leftarrow X/W \\
|
|
y \leftarrow Y/W \\
|
|
\text{only performed if P is specified:} \\
|
|
u' \leftarrow x {f'}_x + {c'}_x \\
|
|
v' \leftarrow y {f'}_y + {c'}_y
|
|
\end{array}
|
|
\)
|
|
|
|
where *undistort* is an approximate iterative algorithm that estimates the normalized original
|
|
point coordinates out of the normalized distorted point coordinates ("normalized" means that the
|
|
coordinates do not depend on the camera matrix).
|
|
|
|
The function can be used for either a stereo camera head or a monocular camera (when R is empty).</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - Observed point coordinates, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or CV_64FC2) (or
|
|
vector<Point2f> ).</dd>
|
|
<dd><code>dst</code> - Output ideal point coordinates (1xN/Nx1 2-channel or vector<Point2f> ) after undistortion and reverse perspective
|
|
transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera matrix \(\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
|
|
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
|
|
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.</dd>
|
|
<dd><code>R</code> - Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by
|
|
#stereoRectify can be passed here. If the matrix is empty, the identity transformation is used.
|
|
In this overload, P is omitted, so the identity new camera matrix is used and dst contains normalized point coordinates.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="undistortPoints(org.opencv.core.MatOfPoint2f,org.opencv.core.MatOfPoint2f,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>undistortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">undistortPoints</span><wbr><span class="parameters">(<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> src,
|
|
<a href="../core/MatOfPoint2f.html" title="class in org.opencv.core">MatOfPoint2f</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs)</span></div>
|
|
<div class="block">Computes the ideal point coordinates from the observed point coordinates.
|
|
|
|
The function is similar to #undistort and #initUndistortRectifyMap, but it operates on a
sparse set of points instead of a raster image. It also performs the reverse transformation
of #projectPoints. For a 3D object it does not reconstruct the 3D coordinates, but for a
planar object it does, up to a translation vector, if the proper R is specified.
|
|
|
|
For each observed point coordinate \((u, v)\) the function computes:
|
|
\(
|
|
\begin{array}{l}
|
|
x^{"} \leftarrow (u - c_x)/f_x \\
|
|
y^{"} \leftarrow (v - c_y)/f_y \\
|
|
(x',y') = undistort(x^{"},y^{"}, \texttt{distCoeffs}) \\
|
|
{[X\,Y\,W]} ^T \leftarrow R*[x' \, y' \, 1]^T \\
|
|
x \leftarrow X/W \\
|
|
y \leftarrow Y/W \\
|
|
\text{only performed if P is specified:} \\
|
|
u' \leftarrow x {f'}_x + {c'}_x \\
|
|
v' \leftarrow y {f'}_y + {c'}_y
|
|
\end{array}
|
|
\)
|
|
|
|
where *undistort* is an approximate iterative algorithm that estimates the normalized original
|
|
point coordinates out of the normalized distorted point coordinates ("normalized" means that the
|
|
coordinates do not depend on the camera matrix).
|
|
|
|
The function can be used for both a stereo camera head and a monocular camera (when R is empty).</div>
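The per-point pipeline above can be sketched in plain Java (hypothetical intrinsics, zero distortion, identity R, and P taken equal to the camera matrix, so the round trip recovers the observed pixel; no OpenCV calls are made):

```java
public class UndistortPointSketch {
    public static void main(String[] args) {
        // Hypothetical intrinsics (fx, fy, cx, cy) and an observed pixel (u, v).
        double fx = 800.0, fy = 820.0, cx = 320.0, cy = 240.0;
        double u = 400.0, v = 300.0;

        // x'' <- (u - cx)/fx, y'' <- (v - cy)/fy
        double xpp = (u - cx) / fx;
        double ypp = (v - cy) / fy;

        // With zero distortion, undistort(x'', y'') is the identity, and with
        // R = I we have W = 1, so (x, y) are already the ideal normalized
        // coordinates that dst would contain when P is omitted.
        double x = xpp, y = ypp;

        // Only performed if P is specified: map back to pixels with the new
        // intrinsics (f'x, f'y, c'x, c'y). Reusing the same intrinsics here
        // recovers the observed pixel, up to floating-point rounding.
        double up = x * fx + cx;
        double vp = y * fy + cy;

        System.out.println(x + " " + y + " " + up + " " + vp);
    }
}
```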
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - Observed point coordinates, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or CV_64FC2) (or
|
|
vector<Point2f> ).</dd>
|
|
<dd><code>dst</code> - Output ideal point coordinates (1xN/Nx1 2-channel or vector<Point2f> ) after undistortion and reverse perspective
|
|
transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.</dd>
|
|
<dd><code>cameraMatrix</code> - Camera matrix \(\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients
\((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\)
of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="undistortPointsIter(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria)">
|
|
<h3>undistortPointsIter</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">undistortPointsIter</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block"><b>Note:</b> Default version of #undistortPoints does 5 iterations to compute undistorted points.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - automatically generated</dd>
|
|
<dd><code>dst</code> - automatically generated</dd>
|
|
<dd><code>cameraMatrix</code> - automatically generated</dd>
|
|
<dd><code>distCoeffs</code> - automatically generated</dd>
|
|
<dd><code>R</code> - automatically generated</dd>
|
|
<dd><code>P</code> - automatically generated</dd>
|
|
<dd><code>criteria</code> - automatically generated</dd>
|
|
</dl>
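The iteration counted in the note above can be sketched for a k1-only radial model (plain Java, hypothetical coefficient; the real implementation handles the full coefficient vector and stops according to the TermCriteria):

```java
public class IterativeUndistortSketch {
    // Forward model (k1-only radial): distorted = ideal * (1 + k1*r^2).
    static double[] distort(double x, double y, double k1) {
        double r2 = x * x + y * y;
        double s = 1.0 + k1 * r2;
        return new double[] { x * s, y * s };
    }

    // Fixed-point undistortion: start from the distorted point and repeatedly
    // divide by the distortion factor evaluated at the current estimate.
    // Five iterations mirrors the default mentioned in the note.
    static double[] undistort(double xd, double yd, double k1, int iters) {
        double x = xd, y = yd;
        for (int i = 0; i < iters; i++) {
            double r2 = x * x + y * y;
            double icdist = 1.0 / (1.0 + k1 * r2);
            x = xd * icdist;
            y = yd * icdist;
        }
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        double k1 = -0.2;                     // hypothetical coefficient
        double[] d = distort(0.3, -0.1, k1);  // distort an ideal point
        double[] u = undistort(d[0], d[1], k1, 5);
        System.out.println(u[0] + " " + u[1]);  // close to (0.3, -0.1)
    }
}
```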
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="undistortImagePoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria)">
|
|
<h3>undistortImagePoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">undistortImagePoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> arg1)</span></div>
|
|
<div class="block">Computes undistorted image point positions.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - Observed points position, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or
|
|
CV_64FC2) (or vector<Point2f> ).</dd>
|
|
<dd><code>dst</code> - Output undistorted points position (1xN/Nx1 2-channel or vector<Point2f> ).</dd>
|
|
<dd><code>cameraMatrix</code> - Camera matrix \(\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Distortion coefficients</dd>
|
|
<dd><code>arg1</code> - automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="undistortImagePoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>undistortImagePoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">undistortImagePoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> src,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> dst,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs)</span></div>
|
|
<div class="block">Computes undistorted image point positions.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>src</code> - Observed points position, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or
|
|
CV_64FC2) (or vector<Point2f> ).</dd>
|
|
<dd><code>dst</code> - Output undistorted points position (1xN/Nx1 2-channel or vector<Point2f> ).</dd>
|
|
<dd><code>cameraMatrix</code> - Camera matrix \(\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Distortion coefficients</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_projectPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Mat)">
|
|
<h3>fisheye_projectPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_projectPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
double alpha,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> jacobian)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_projectPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)">
|
|
<h3>fisheye_projectPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_projectPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
double alpha)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_projectPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_projectPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_projectPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_distortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)">
|
|
<h3>fisheye_distortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_distortPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
double alpha)</span></div>
|
|
<div class="block">Distorts 2D points using fisheye model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>undistorted</code> - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f> ), where N is
|
|
the number of points in the view.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>alpha</code> - The skew coefficient.</dd>
|
|
<dd><code>distorted</code> - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> .
|
|
|
|
Note that the function assumes the camera intrinsic matrix of the undistorted points to be identity.
This means that, if you want to distort image points, you have to multiply them by \(K^{-1}\) or
use another function overload.</dd>
|
|
</dl>
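The fisheye forward model that this function applies can be sketched in plain Java (hypothetical intrinsics and coefficients, zero skew; the input point is assumed to be already in the identity-camera frame, as the note above requires):

```java
public class FisheyeDistortSketch {
    // Forward fisheye model: theta_d = theta*(1 + k1*t^2 + k2*t^4 + k3*t^6 + k4*t^8),
    // then the distorted normalized point is scaled back into pixels with K.
    static double[] distort(double x, double y, double[] k,
                            double fx, double fy, double cx, double cy) {
        double r = Math.sqrt(x * x + y * y);
        double theta = Math.atan(r);
        double t2 = theta * theta;
        double thetaD = theta * (1 + k[0]*t2 + k[1]*t2*t2
                                   + k[2]*t2*t2*t2 + k[3]*t2*t2*t2*t2);
        double scale = (r > 1e-12) ? thetaD / r : 1.0;
        // Skew coefficient (alpha) assumed zero in this sketch.
        return new double[] { fx * scale * x + cx, fy * scale * y + cy };
    }

    public static void main(String[] args) {
        // Hypothetical intrinsics and distortion coefficients.
        double[] k = { 0.02, -0.003, 0.0, 0.0 };
        double[] uv = distort(0.3, -0.1, k, 500.0, 510.0, 316.0, 241.0);
        System.out.println(uv[0] + " " + uv[1]);
    }
}
```

Note that even with all coefficients zero the mapping is not the identity: the atan(r)/r factor is the equidistant fisheye projection itself.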
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_distortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_distortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_distortPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D)</span></div>
|
|
<div class="block">Distorts 2D points using fisheye model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>undistorted</code> - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f> ), where N is
|
|
the number of points in the view.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>distorted</code> - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> .
|
|
|
|
Note that the function assumes the camera intrinsic matrix of the undistorted points to be identity.
This means that, if you want to distort image points, you have to multiply them by \(K^{-1}\) or
use another function overload.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_distortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,double)">
|
|
<h3>fisheye_distortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_distortPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Kundistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
double alpha)</span></div>
|
|
<div class="block">Overload of the distortPoints function to handle cases when the undistorted points are obtained with a
non-identity camera matrix, e.g. the output of #estimateNewCameraMatrixForUndistortRectify.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>undistorted</code> - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f> ), where N is
|
|
the number of points in the view.</dd>
|
|
<dd><code>Kundistorted</code> - Camera intrinsic matrix used as new camera matrix for undistortion.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>alpha</code> - The skew coefficient.</dd>
|
|
<dd><code>distorted</code> - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> .
|
|
SEE: estimateNewCameraMatrixForUndistortRectify</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_distortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_distortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_distortPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Kundistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D)</span></div>
|
|
<div class="block">Overload of the distortPoints function to handle cases when the undistorted points are obtained with a
non-identity camera matrix, e.g. the output of #estimateNewCameraMatrixForUndistortRectify.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>undistorted</code> - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f> ), where N is
|
|
the number of points in the view.</dd>
|
|
<dd><code>Kundistorted</code> - Camera intrinsic matrix used as new camera matrix for undistortion.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>distorted</code> - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> .
|
|
SEE: estimateNewCameraMatrixForUndistortRectify</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_undistortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.TermCriteria)">
|
|
<h3>fisheye_undistortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_undistortPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block">Undistorts 2D points using fisheye model</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>distorted</code> - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f> ), where N is the
|
|
number of points in the view.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>R</code> - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
|
|
1-channel or 1x1 3-channel</dd>
|
|
<dd><code>P</code> - New camera intrinsic matrix (3x3) or new projection matrix (3x4)</dd>
|
|
<dd><code>criteria</code> - Termination criteria</dd>
|
|
<dd><code>undistorted</code> - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> .</dd>
|
|
</dl>
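Undistortion must invert \(\theta_d = \theta (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8)\). One way this can be done is a fixed-point iteration, sketched here in plain Java (hypothetical coefficients; not OpenCV's exact algorithm, which stops according to the TermCriteria):

```java
public class FisheyeUndistortSketch {
    // Invert theta_d = theta*(1 + k1*t^2 + ...) by fixed-point iteration:
    // theta <- theta_d / (1 + k1*theta^2 + ...).
    static double solveTheta(double thetaD, double[] k, int maxIters, double eps) {
        double theta = thetaD;
        for (int i = 0; i < maxIters; i++) {
            double t2 = theta * theta;
            double next = thetaD / (1 + k[0]*t2 + k[1]*t2*t2
                                      + k[2]*t2*t2*t2 + k[3]*t2*t2*t2*t2);
            if (Math.abs(next - theta) < eps) return next;
            theta = next;
        }
        return theta;
    }

    public static void main(String[] args) {
        double[] k = { 0.02, -0.003, 0.0, 0.0 };  // hypothetical coefficients
        double theta = 0.7;                       // true incidence angle (rad)
        double t2 = theta * theta;
        double thetaD = theta * (1 + k[0]*t2 + k[1]*t2*t2);
        double rec = solveTheta(thetaD, k, 30, 1e-12);
        System.out.println(rec);                  // close to 0.7
    }
}
```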
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_undistortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_undistortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_undistortPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P)</span></div>
|
|
<div class="block">Undistorts 2D points using fisheye model</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>distorted</code> - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f> ), where N is the
|
|
number of points in the view.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>R</code> - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
|
|
1-channel or 1x1 3-channel</dd>
|
|
<dd><code>P</code> - New camera intrinsic matrix (3x3) or new projection matrix (3x4)</dd>
|
|
<dd><code>undistorted</code> - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_undistortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_undistortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_undistortPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R)</span></div>
|
|
<div class="block">Undistorts 2D points using fisheye model</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>distorted</code> - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f> ), where N is the
|
|
number of points in the view.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>R</code> - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
|
|
1-channel or 1x1 3-channel</dd>
|
|
<dd><code>undistorted</code> - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_undistortPoints(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_undistortPoints</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_undistortPoints</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D)</span></div>
|
|
<div class="block">Undistorts 2D points using fisheye model</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>distorted</code> - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f> ), where N is the
|
|
number of points in the view.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>undistorted</code> - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_initUndistortRectifyMap(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,int,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_initUndistortRectifyMap</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_initUndistortRectifyMap</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> size,
|
|
int m1type,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> map2)</span></div>
|
|
<div class="block">Computes undistortion and rectification maps for image transform by #remap. If D is empty, zero
distortion is used; if R or P is empty, identity matrices are used.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>R</code> - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
|
|
1-channel or 1x1 3-channel</dd>
|
|
<dd><code>P</code> - New camera intrinsic matrix (3x3) or new projection matrix (3x4)</dd>
|
|
<dd><code>size</code> - Undistorted image size.</dd>
|
|
<dd><code>m1type</code> - Type of the first output map that can be CV_32FC1 or CV_16SC2 . See #convertMaps
|
|
for details.</dd>
|
|
<dd><code>map1</code> - The first output map.</dd>
|
|
<dd><code>map2</code> - The second output map.</dd>
|
|
</dl>
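The maps can be thought of as storing, for every pixel of the undistorted output, the distorted source coordinates that #remap will sample. A plain-Java sketch using a k1-only radial model for simplicity (hypothetical intrinsics; the fisheye functions use the fisheye model instead):

```java
public class InitUndistortMapSketch {
    public static void main(String[] args) {
        // Hypothetical intrinsics and a k1-only radial coefficient.
        double fx = 100.0, fy = 100.0, cx = 4.0, cy = 4.0, k1 = -0.2;
        int w = 9, h = 9;
        float[] map1 = new float[w * h];   // source x for each destination pixel
        float[] map2 = new float[w * h];   // source y for each destination pixel
        for (int v = 0; v < h; v++) {
            for (int u = 0; u < w; u++) {
                // Back-project through P = K and R = I to normalized coords.
                double x = (u - cx) / fx, y = (v - cy) / fy;
                // Apply the forward distortion to find where to sample.
                double r2 = x * x + y * y;
                double s = 1.0 + k1 * r2;
                map1[v * w + u] = (float) (fx * x * s + cx);
                map2[v * w + u] = (float) (fy * y * s + cy);
            }
        }
        // The principal point maps to itself (zero radius, no distortion).
        System.out.println(map1[4 * w + 4] + " " + map2[4 * w + 4]);
    }
}
```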
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_undistortImage(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size)">
|
|
<h3>fisheye_undistortImage</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_undistortImage</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Knew,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> new_size)</span></div>
|
|
<div class="block">Transforms an image to compensate for fisheye lens distortion.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>distorted</code> - image with fisheye lens distortion.</dd>
|
|
<dd><code>undistorted</code> - Output image with compensated fisheye lens distortion.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>Knew</code> - Camera intrinsic matrix of the distorted image. By default, it is the identity matrix but you
|
|
may additionally scale and shift the result by using a different matrix.</dd>
|
|
<dd><code>new_size</code> - the new size
|
|
|
|
The function transforms an image to compensate for radial and tangential lens distortion.
|
|
|
|
The function is simply a combination of #fisheye::initUndistortRectifyMap (with unity R ) and #remap
|
|
(with bilinear interpolation). See the former function for details of the transformation being
|
|
performed.
|
|
|
|
See below the results of undistortImage.
|
|
<ul>
|
|
<li>
|
|
a) result of undistort of perspective camera model (all possible coefficients (k_1, k_2, k_3,
k_4, k_5, k_6) of distortion were optimized under calibration)
<ul>
<li>
b) result of #fisheye::undistortImage of fisheye camera model (all possible coefficients (k_1, k_2,
k_3, k_4) of fisheye distortion were optimized under calibration)
</li>
<li>
c) original image was captured with fisheye lens
|
|
</li>
|
|
</ul>
|
|
|
|
Pictures a) and b) are almost the same. But if we consider points of the image located far from the center
of the image, we can notice that in image a) these points are distorted.
|
|
</li>
|
|
</ul>
|
|
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_undistortImage(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_undistortImage</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_undistortImage</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Knew)</span></div>
|
|
<div class="block">Transforms an image to compensate for fisheye lens distortion.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>distorted</code> - image with fisheye lens distortion.</dd>
|
|
<dd><code>undistorted</code> - Output image with compensated fisheye lens distortion.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>Knew</code> - Camera intrinsic matrix of the distorted image. By default, it is the identity matrix but you
|
|
may additionally scale and shift the result by using a different matrix.
|
|
|
|
The function transforms an image to compensate for radial and tangential lens distortion.
|
|
|
|
The function is simply a combination of #fisheye::initUndistortRectifyMap (with unity R ) and #remap
|
|
(with bilinear interpolation). See the former function for details of the transformation being
|
|
performed.
|
|
|
|
See below the results of undistortImage.
|
|
<ul>
|
|
<li>
|
|
a) result of undistort of perspective camera model (all possible coefficients (k_1, k_2, k_3,
k_4, k_5, k_6) of distortion were optimized under calibration)
<ul>
<li>
b) result of #fisheye::undistortImage of fisheye camera model (all possible coefficients (k_1, k_2,
k_3, k_4) of fisheye distortion were optimized under calibration)
</li>
<li>
c) original image was captured with fisheye lens
|
|
</li>
|
|
</ul>
|
|
|
|
Pictures a) and b) are almost the same. But if we consider points of the image located far from the center
of the image, we can notice that in image a) these points are distorted.
|
|
</li>
|
|
</ul>
|
|
|
|
</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_undistortImage(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_undistortImage</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_undistortImage</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> undistorted,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D)</span></div>
|
|
<div class="block">Transforms an image to compensate for fisheye lens distortion.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>distorted</code> - image with fisheye lens distortion.</dd>
|
|
<dd><code>undistorted</code> - Output image with compensated fisheye lens distortion.</dd>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).
|
|
|
|
The function transforms an image to compensate for radial and tangential lens distortion.
|
|
|
|
The function is simply a combination of #fisheye::initUndistortRectifyMap (with unity R ) and #remap
|
|
(with bilinear interpolation). See the former function for details of the transformation being
|
|
performed.
|
|
|
|
See below the results of undistortImage.
|
|
<ul>
|
|
<li>
|
|
a) result of undistort of perspective camera model (all possible coefficients (k_1, k_2, k_3,
k_4, k_5, k_6) of distortion were optimized under calibration)
<ul>
<li>
b) result of #fisheye::undistortImage of fisheye camera model (all possible coefficients (k_1, k_2,
k_3, k_4) of fisheye distortion were optimized under calibration)
</li>
<li>
c) original image was captured with fisheye lens
|
|
</li>
|
|
</ul>
|
|
|
|
Pictures a) and b) are almost the same, but if we consider points located far from the image
center, we can see that in image a) these points remain distorted.
|
|
</li>
|
|
</ul>
|
|
|
|
</dd>
|
|
</dl>
|
|
</section>
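The equidistant fisheye model that this function inverts per pixel can be sketched in plain Java. This is a simplified illustration of the distortion polynomial described in the OpenCV fisheye documentation, not the library implementation; the class and method names (<code>FisheyeModelSketch</code>, <code>distortTheta</code>, <code>undistortTheta</code>) and the coefficient values are hypothetical.

```java
// Sketch of the fisheye distortion model theta_d = theta * (1 + k1*t^2 + k2*t^4 + k3*t^6 + k4*t^8)
// and its numeric inversion, which is conceptually what undistortImage performs per pixel.
// Illustrative only; the real implementation lives in OpenCV's fisheye module.
public class FisheyeModelSketch {

    // Forward model: distorted angle as a polynomial in the incidence angle theta.
    public static double distortTheta(double theta, double[] k) {
        double t2 = theta * theta;
        return theta * (1 + k[0] * t2 + k[1] * t2 * t2
                          + k[2] * t2 * t2 * t2 + k[3] * t2 * t2 * t2 * t2);
    }

    // Inverse model via fixed-point iteration: solve distortTheta(theta) = thetaD for theta.
    public static double undistortTheta(double thetaD, double[] k) {
        double theta = thetaD; // initial guess: distortion is mild near the axis
        for (int i = 0; i < 20; i++) {
            double t2 = theta * theta;
            double scale = 1 + k[0] * t2 + k[1] * t2 * t2
                             + k[2] * t2 * t2 * t2 + k[3] * t2 * t2 * t2 * t2;
            theta = thetaD / scale;
        }
        return theta;
    }

    public static void main(String[] args) {
        double[] k = {0.1, 0.01, 0.0, 0.0}; // hypothetical distortion coefficients
        double theta = 0.5;
        double thetaD = distortTheta(theta, k);
        double recovered = undistortTheta(thetaD, k);
        System.out.println("round trip error = " + Math.abs(recovered - theta));
    }
}
```

The round trip (distort, then numerically undistort) recovers the original angle, which is why the combination of initUndistortRectifyMap and remap described above produces an undistorted image.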
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_estimateNewCameraMatrixForUndistortRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Size,double)">
|
|
<h3>fisheye_estimateNewCameraMatrixForUndistortRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_estimateNewCameraMatrixForUndistortRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
|
|
double balance,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> new_size,
|
|
double fov_scale)</span></div>
|
|
<div class="block">Estimates new camera intrinsic matrix for undistortion or rectification.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>image_size</code> - Size of the image</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>R</code> - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
|
|
1-channel or 1x1 3-channel</dd>
|
|
<dd><code>P</code> - New camera intrinsic matrix (3x3) or new projection matrix (3x4)</dd>
|
|
<dd><code>balance</code> - Sets the new focal length between the minimum and the maximum focal
length. Balance must be in the range [0, 1].</dd>
|
|
<dd><code>new_size</code> - the new size</dd>
|
|
<dd><code>fov_scale</code> - Divisor for new focal length.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_estimateNewCameraMatrixForUndistortRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,double,org.opencv.core.Size)">
|
|
<h3>fisheye_estimateNewCameraMatrixForUndistortRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_estimateNewCameraMatrixForUndistortRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
|
|
double balance,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> new_size)</span></div>
|
|
<div class="block">Estimates new camera intrinsic matrix for undistortion or rectification.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>image_size</code> - Size of the image</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>R</code> - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
|
|
1-channel or 1x1 3-channel</dd>
|
|
<dd><code>P</code> - New camera intrinsic matrix (3x3) or new projection matrix (3x4)</dd>
|
|
<dd><code>balance</code> - Sets the new focal length between the minimum and the maximum focal
length. Balance must be in the range [0, 1].</dd>
|
|
<dd><code>new_size</code> - the new size</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_estimateNewCameraMatrixForUndistortRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,double)">
|
|
<h3>fisheye_estimateNewCameraMatrixForUndistortRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_estimateNewCameraMatrixForUndistortRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P,
|
|
double balance)</span></div>
|
|
<div class="block">Estimates new camera intrinsic matrix for undistortion or rectification.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>image_size</code> - Size of the image</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>R</code> - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
|
|
1-channel or 1x1 3-channel</dd>
|
|
<dd><code>P</code> - New camera intrinsic matrix (3x3) or new projection matrix (3x4)</dd>
|
|
<dd><code>balance</code> - Sets the new focal length between the minimum and the maximum focal
length. Balance must be in the range [0, 1].</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_estimateNewCameraMatrixForUndistortRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_estimateNewCameraMatrixForUndistortRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_estimateNewCameraMatrixForUndistortRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P)</span></div>
|
|
<div class="block">Estimates new camera intrinsic matrix for undistortion or rectification.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>K</code> - Camera intrinsic matrix \(\cameramatrix{K}\).</dd>
|
|
<dd><code>image_size</code> - Size of the image</dd>
|
|
<dd><code>D</code> - Input vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>R</code> - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
|
|
1-channel or 1x1 3-channel</dd>
|
|
<dd><code>P</code> - New camera intrinsic matrix (3x3) or new projection matrix (3x4)</dd>
|
|
</dl>
|
|
</section>
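The interplay of <code>balance</code> and <code>fov_scale</code> can be illustrated with plain arithmetic, under the assumption that the function has already derived a minimum and a maximum admissible focal length from K, D and the image size. The class and method names below are hypothetical and the interpolation is a simplified sketch, not the OpenCV implementation.

```java
// Illustrates how balance interpolates the new focal length between a minimum
// (whole source image visible) and a maximum (tighter crop), and how fov_scale
// divides the result. Simplified sketch; not the OpenCV implementation.
public class BalanceSketch {

    public static double newFocalLength(double fMin, double fMax,
                                        double balance, double fovScale) {
        if (balance < 0 || balance > 1) {
            throw new IllegalArgumentException("balance must be in [0, 1]");
        }
        double f = fMin * (1.0 - balance) + fMax * balance; // linear interpolation
        return f / fovScale;                                // fovScale > 1 widens the FOV
    }

    public static void main(String[] args) {
        // Hypothetical bounds derived from a calibration: midway between them.
        System.out.println(newFocalLength(300.0, 600.0, 0.5, 1.0)); // 450.0
    }
}
```

With balance = 0 the result sits at the minimum focal length, with balance = 1 at the maximum; a fov_scale of 2 then halves whichever focal length was chosen.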
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_calibrate(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int,org.opencv.core.TermCriteria)">
|
|
<h3>fisheye_calibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">fisheye_calibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block">Performs camera calibration for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - vector of vectors of calibration pattern points in the calibration pattern
|
|
coordinate space.</dd>
|
|
<dd><code>imagePoints</code> - vector of vectors of the projections of calibration pattern points.
|
|
imagePoints.size() and objectPoints.size() and imagePoints[i].size() must be equal to
|
|
objectPoints[i].size() for each i.</dd>
|
|
<dd><code>image_size</code> - Size of the image used only to initialize the camera intrinsic matrix.</dd>
|
|
<dd><code>K</code> - Output 3x3 floating-point camera intrinsic matrix
|
|
\(\cameramatrix{A}\) . If
|
|
REF: fisheye::CALIB_USE_INTRINSIC_GUESS is specified, some or all of fx, fy, cx, cy must be
|
|
initialized before calling the function.</dd>
|
|
<dd><code>D</code> - Output vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors (see REF: Rodrigues ) estimated for each pattern view.
|
|
That is, each k-th rotation vector together with the corresponding k-th translation vector (see
|
|
the next output parameter description) brings the calibration pattern from the model coordinate
|
|
space (in which object points are specified) to the world coordinate space, that is, a real
|
|
position of the calibration pattern in the k-th pattern view (k = 0..M-1).</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view.</dd>
|
|
<dd><code>flags</code> - Different flags that may be zero or a combination of the following values:
|
|
<ul>
|
|
<li>
|
|
REF: fisheye::CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of
|
|
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
|
|
center ( imageSize is used), and focal distances are computed in a least-squares fashion.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_RECOMPUTE_EXTRINSIC Extrinsics will be recomputed after each iteration
of the intrinsic optimization.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_CHECK_COND The function will check the validity of the condition number.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_FIX_SKEW The skew coefficient (alpha) is set to zero and stays zero.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_FIX_K1, ..., REF: fisheye::CALIB_FIX_K4 Selected distortion coefficients
are set to zero and stay zero.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global
optimization. It stays at the center or at a different location specified when REF: fisheye::CALIB_USE_INTRINSIC_GUESS is also set.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global
optimization. It is \(max(width,height)/\pi\) or the provided \(f_x\), \(f_y\) when REF: fisheye::CALIB_USE_INTRINSIC_GUESS is also set.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>criteria</code> - Termination criteria for the iterative optimization algorithm.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_calibrate(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int)">
|
|
<h3>fisheye_calibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">fisheye_calibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
int flags)</span></div>
|
|
<div class="block">Performs camera calibration for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - vector of vectors of calibration pattern points in the calibration pattern
|
|
coordinate space.</dd>
|
|
<dd><code>imagePoints</code> - vector of vectors of the projections of calibration pattern points.
|
|
imagePoints.size() and objectPoints.size() and imagePoints[i].size() must be equal to
|
|
objectPoints[i].size() for each i.</dd>
|
|
<dd><code>image_size</code> - Size of the image used only to initialize the camera intrinsic matrix.</dd>
|
|
<dd><code>K</code> - Output 3x3 floating-point camera intrinsic matrix
|
|
\(\cameramatrix{A}\) . If
|
|
REF: fisheye::CALIB_USE_INTRINSIC_GUESS is specified, some or all of fx, fy, cx, cy must be
|
|
initialized before calling the function.</dd>
|
|
<dd><code>D</code> - Output vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors (see REF: Rodrigues ) estimated for each pattern view.
|
|
That is, each k-th rotation vector together with the corresponding k-th translation vector (see
|
|
the next output parameter description) brings the calibration pattern from the model coordinate
|
|
space (in which object points are specified) to the world coordinate space, that is, a real
|
|
position of the calibration pattern in the k-th pattern view (k = 0..M-1).</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view.</dd>
|
|
<dd><code>flags</code> - Different flags that may be zero or a combination of the following values:
|
|
<ul>
|
|
<li>
|
|
REF: fisheye::CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of
|
|
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
|
|
center ( imageSize is used), and focal distances are computed in a least-squares fashion.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_RECOMPUTE_EXTRINSIC Extrinsics will be recomputed after each iteration
of the intrinsic optimization.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_CHECK_COND The function will check the validity of the condition number.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_FIX_SKEW The skew coefficient (alpha) is set to zero and stays zero.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_FIX_K1, ..., REF: fisheye::CALIB_FIX_K4 Selected distortion coefficients
are set to zero and stay zero.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global
optimization. It stays at the center or at a different location specified when REF: fisheye::CALIB_USE_INTRINSIC_GUESS is also set.
|
|
</li>
|
|
<li>
|
|
REF: fisheye::CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global
optimization. It is \(max(width,height)/\pi\) or the provided \(f_x\), \(f_y\) when REF: fisheye::CALIB_USE_INTRINSIC_GUESS is also set.
|
|
</li>
|
|
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_calibrate(java.util.List,java.util.List,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List)">
|
|
<h3>fisheye_calibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">fisheye_calibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> image_size,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs)</span></div>
|
|
<div class="block">Performs camera calibration for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - vector of vectors of calibration pattern points in the calibration pattern
|
|
coordinate space.</dd>
|
|
<dd><code>imagePoints</code> - vector of vectors of the projections of calibration pattern points.
|
|
imagePoints.size() and objectPoints.size() and imagePoints[i].size() must be equal to
|
|
objectPoints[i].size() for each i.</dd>
|
|
<dd><code>image_size</code> - Size of the image used only to initialize the camera intrinsic matrix.</dd>
|
|
<dd><code>K</code> - Output 3x3 floating-point camera intrinsic matrix
|
|
\(\cameramatrix{A}\) . If
|
|
REF: fisheye::CALIB_USE_INTRINSIC_GUESS is specified, some or all of fx, fy, cx, cy must be
|
|
initialized before calling the function.</dd>
|
|
<dd><code>D</code> - Output vector of distortion coefficients \(\distcoeffsfisheye\).</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors (see REF: Rodrigues ) estimated for each pattern view.
|
|
That is, each k-th rotation vector together with the corresponding k-th translation vector (see
|
|
the next output parameter description) brings the calibration pattern from the model coordinate
|
|
space (in which object points are specified) to the world coordinate space, that is, a real
|
|
position of the calibration pattern in the k-th pattern view (k = 0..M-1).</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
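The double returned by fisheye_calibrate is the overall RMS reprojection error. What that quantity means can be sketched in plain Java; this illustrates only the error metric, not the calibration algorithm, and the class and method names are hypothetical.

```java
// Computes the RMS reprojection error: the root mean square of the pixel
// distances between observed image points and points projected with the
// estimated camera parameters. Illustrative sketch of the returned metric.
public class RmsSketch {

    // observed[i] and projected[i] are {x, y} pixel coordinates.
    public static double rmsError(double[][] observed, double[][] projected) {
        double sumSq = 0;
        for (int i = 0; i < observed.length; i++) {
            double dx = observed[i][0] - projected[i][0];
            double dy = observed[i][1] - projected[i][1];
            sumSq += dx * dx + dy * dy;
        }
        return Math.sqrt(sumSq / observed.length);
    }

    public static void main(String[] args) {
        // Two hypothetical points: one 5 px off (a 3-4-5 triangle), one exact.
        double[][] obs  = {{103.0, 204.0}, {50.0, 60.0}};
        double[][] proj = {{100.0, 200.0}, {50.0, 60.0}};
        System.out.println(rmsError(obs, proj)); // sqrt((9 + 16) / 2) = sqrt(12.5)
    }
}
```

A return value well below one pixel usually indicates a good calibration; a large value suggests mislabeled points or an unsuitable model.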
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.Size,double,double)">
|
|
<h3>fisheye_stereoRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_stereoRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
int flags,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize,
|
|
double balance,
|
|
double fov_scale)</span></div>
|
|
<div class="block">Stereo rectification for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>K1</code> - First camera intrinsic matrix.</dd>
|
|
<dd><code>D1</code> - First camera distortion parameters.</dd>
|
|
<dd><code>K2</code> - Second camera intrinsic matrix.</dd>
|
|
<dd><code>D2</code> - Second camera distortion parameters.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used for stereo calibration.</dd>
|
|
<dd><code>R</code> - Rotation matrix between the coordinate systems of the first and the second
|
|
cameras.</dd>
|
|
<dd><code>tvec</code> - Translation vector between coordinate systems of the cameras.</dd>
|
|
<dd><code>R1</code> - Output 3x3 rectification transform (rotation matrix) for the first camera.</dd>
|
|
<dd><code>R2</code> - Output 3x3 rectification transform (rotation matrix) for the second camera.</dd>
|
|
<dd><code>P1</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
|
|
camera.</dd>
|
|
<dd><code>P2</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
|
|
camera.</dd>
|
|
<dd><code>Q</code> - Output \(4 \times 4\) disparity-to-depth mapping matrix (see #reprojectImageTo3D ).</dd>
|
|
<dd><code>flags</code> - Operation flags that may be zero or REF: fisheye::CALIB_ZERO_DISPARITY. If the flag is set,
the function makes the principal points of each camera have the same pixel coordinates in the
rectified views. If the flag is not set, the function may still shift the images in the
horizontal or vertical direction (depending on the orientation of the epipolar lines) to maximize the
useful image area.</dd>
|
|
<dd><code>newImageSize</code> - New image resolution after rectification. The same size should be passed to
#initUndistortRectifyMap (see the stereo_calib.cpp sample in the OpenCV samples directory). When (0,0)
is passed (default), it is set to the original imageSize. Setting it to a larger value can help you
preserve details in the original image, especially when there is significant radial distortion.</dd>
|
|
<dd><code>balance</code> - Sets the new focal length between the minimum and the maximum focal
length. Balance must be in the range [0, 1].</dd>
|
|
<dd><code>fov_scale</code> - Divisor for new focal length.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.Size,double)">
|
|
<h3>fisheye_stereoRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_stereoRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
int flags,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize,
|
|
double balance)</span></div>
|
|
<div class="block">Stereo rectification for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>K1</code> - First camera intrinsic matrix.</dd>
|
|
<dd><code>D1</code> - First camera distortion parameters.</dd>
|
|
<dd><code>K2</code> - Second camera intrinsic matrix.</dd>
|
|
<dd><code>D2</code> - Second camera distortion parameters.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used for stereo calibration.</dd>
|
|
<dd><code>R</code> - Rotation matrix between the coordinate systems of the first and the second
|
|
cameras.</dd>
|
|
<dd><code>tvec</code> - Translation vector between coordinate systems of the cameras.</dd>
|
|
<dd><code>R1</code> - Output 3x3 rectification transform (rotation matrix) for the first camera.</dd>
|
|
<dd><code>R2</code> - Output 3x3 rectification transform (rotation matrix) for the second camera.</dd>
|
|
<dd><code>P1</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
|
|
camera.</dd>
|
|
<dd><code>P2</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
|
|
camera.</dd>
|
|
<dd><code>Q</code> - Output \(4 \times 4\) disparity-to-depth mapping matrix (see #reprojectImageTo3D ).</dd>
|
|
<dd><code>flags</code> - Operation flags that may be zero or REF: fisheye::CALIB_ZERO_DISPARITY. If the flag is set,
the function makes the principal points of each camera have the same pixel coordinates in the
rectified views. If the flag is not set, the function may still shift the images in the
horizontal or vertical direction (depending on the orientation of the epipolar lines) to maximize the
useful image area.</dd>
|
|
<dd><code>newImageSize</code> - New image resolution after rectification. The same size should be passed to
#initUndistortRectifyMap (see the stereo_calib.cpp sample in the OpenCV samples directory). When (0,0)
is passed (default), it is set to the original imageSize. Setting it to a larger value can help you
preserve details in the original image, especially when there is significant radial distortion.</dd>
|
|
<dd><code>balance</code> - Sets the new focal length between the minimum and the maximum focal
length. Balance must be in the range [0, 1].</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.Size)">
|
|
<h3>fisheye_stereoRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_stereoRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
int flags,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> newImageSize)</span></div>
|
|
<div class="block">Stereo rectification for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>K1</code> - First camera intrinsic matrix.</dd>
|
|
<dd><code>D1</code> - First camera distortion parameters.</dd>
|
|
<dd><code>K2</code> - Second camera intrinsic matrix.</dd>
|
|
<dd><code>D2</code> - Second camera distortion parameters.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used for stereo calibration.</dd>
|
|
<dd><code>R</code> - Rotation matrix between the coordinate systems of the first and the second
|
|
cameras.</dd>
|
|
<dd><code>tvec</code> - Translation vector between coordinate systems of the cameras.</dd>
|
|
<dd><code>R1</code> - Output 3x3 rectification transform (rotation matrix) for the first camera.</dd>
|
|
<dd><code>R2</code> - Output 3x3 rectification transform (rotation matrix) for the second camera.</dd>
|
|
<dd><code>P1</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
|
|
camera.</dd>
|
|
<dd><code>P2</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
|
|
camera.</dd>
|
|
<dd><code>Q</code> - Output \(4 \times 4\) disparity-to-depth mapping matrix (see #reprojectImageTo3D ).</dd>
|
|
<dd><code>flags</code> - Operation flags that may be zero or REF: fisheye::CALIB_ZERO_DISPARITY . If the flag is set,
|
|
the function makes the principal points of each camera have the same pixel coordinates in the
|
|
rectified views. If the flag is not set, the function may still shift the images in the
|
|
horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the
|
|
useful image area.</dd>
|
|
<dd><code>newImageSize</code> - New image resolution after rectification. The same size should be passed to
|
|
#initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0)
|
|
is passed (default), it is set to the original imageSize . Setting it to a larger value can help you
preserve details in the original image, especially when there is significant radial distortion.</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
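The call above can be sketched as follows. This is a minimal, hypothetical example: the intrinsics, distortion coefficients, and 6 cm baseline are made-up values standing in for a real fisheye calibration, and it assumes the OpenCV native library is on <code>java.library.path</code> (the <code>fisheye_CALIB_ZERO_DISPARITY</code> constant is the Java name for <code>fisheye::CALIB_ZERO_DISPARITY</code>).

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class FisheyeRectifySketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // native bindings required

        // Hypothetical intrinsics from a previous fisheye calibration.
        Mat K1 = Mat.eye(3, 3, CvType.CV_64F);
        K1.put(0, 0, 300.0); K1.put(1, 1, 300.0);
        K1.put(0, 2, 640.0); K1.put(1, 2, 360.0);
        Mat K2 = K1.clone();
        Mat D1 = Mat.zeros(4, 1, CvType.CV_64F); // fisheye model: 4 coefficients
        Mat D2 = Mat.zeros(4, 1, CvType.CV_64F);

        // Relative pose between the cameras: identity rotation, 6 cm baseline.
        Mat R = Mat.eye(3, 3, CvType.CV_64F);
        Mat tvec = Mat.zeros(3, 1, CvType.CV_64F);
        tvec.put(0, 0, -0.06);

        Size imageSize = new Size(1280, 720);
        Mat R1 = new Mat(), R2 = new Mat(), P1 = new Mat(), P2 = new Mat(), Q = new Mat();

        Calib3d.fisheye_stereoRectify(K1, D1, K2, D2, imageSize, R, tvec,
                R1, R2, P1, P2, Q,
                Calib3d.fisheye_CALIB_ZERO_DISPARITY, imageSize);

        // R1/R2 are the 3x3 rectification rotations; P1/P2 the 3x4 projections.
        System.out.println("P2 = " + P2.dump());
    }
}
```

The resulting R1/R2 and P1/P2 would typically be passed on to <code>fisheye_initUndistortRectifyMap</code> to build the remap tables for each camera.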
|
|
<li>
|
|
<section class="detail" id="fisheye_stereoRectify(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>fisheye_stereoRectify</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">void</span> <span class="element-name">fisheye_stereoRectify</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> P2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> Q,
|
|
int flags)</span></div>
|
|
<div class="block">Stereo rectification for fisheye camera model</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>K1</code> - First camera intrinsic matrix.</dd>
|
|
<dd><code>D1</code> - First camera distortion parameters.</dd>
|
|
<dd><code>K2</code> - Second camera intrinsic matrix.</dd>
|
|
<dd><code>D2</code> - Second camera distortion parameters.</dd>
|
|
<dd><code>imageSize</code> - Size of the image used for stereo calibration.</dd>
|
|
<dd><code>R</code> - Rotation matrix between the coordinate systems of the first and the second
|
|
cameras.</dd>
|
|
<dd><code>tvec</code> - Translation vector between coordinate systems of the cameras.</dd>
|
|
<dd><code>R1</code> - Output 3x3 rectification transform (rotation matrix) for the first camera.</dd>
|
|
<dd><code>R2</code> - Output 3x3 rectification transform (rotation matrix) for the second camera.</dd>
|
|
<dd><code>P1</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
|
|
camera.</dd>
|
|
<dd><code>P2</code> - Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
|
|
camera.</dd>
|
|
<dd><code>Q</code> - Output \(4 \times 4\) disparity-to-depth mapping matrix (see #reprojectImageTo3D ).</dd>
|
|
<dd><code>flags</code> - Operation flags that may be zero or REF: fisheye::CALIB_ZERO_DISPARITY . If the flag is set,
|
|
the function makes the principal points of each camera have the same pixel coordinates in the
|
|
rectified views. If the flag is not set, the function may still shift the images in the
horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the
useful image area.

In this overload the new image resolution after rectification is not specified and defaults to
the original imageSize .</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int,org.opencv.core.TermCriteria)">
|
|
<h3>fisheye_stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">fisheye_stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block">Performs stereo calibration</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of the calibration pattern points.</dd>
|
|
<dd><code>imagePoints1</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the first camera.</dd>
|
|
<dd><code>imagePoints2</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the second camera.</dd>
|
|
<dd><code>K1</code> - Input/output first camera intrinsic matrix:
|
|
\(\vecthreethree{f_x^{(j)}}{0}{c_x^{(j)}}{0}{f_y^{(j)}}{c_y^{(j)}}{0}{0}{1}\) , \(j = 0,\, 1\) . If
|
|
any of REF: fisheye::CALIB_USE_INTRINSIC_GUESS , REF: fisheye::CALIB_FIX_INTRINSIC are specified,
|
|
some or all of the matrix components must be initialized.</dd>
|
|
<dd><code>D1</code> - Input/output vector of distortion coefficients \(\distcoeffsfisheye\) of 4 elements.</dd>
|
|
<dd><code>K2</code> - Input/output second camera intrinsic matrix. The parameter is similar to K1 .</dd>
|
|
<dd><code>D2</code> - Input/output lens distortion coefficients for the second camera. The parameter is
|
|
similar to D1 .</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize camera intrinsic matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix between the 1st and the 2nd camera coordinate systems.</dd>
|
|
<dd><code>T</code> - Output translation vector between the coordinate systems of the cameras.</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors ( REF: Rodrigues ) estimated for each pattern view in the
|
|
coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each
|
|
i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter
|
|
description) brings the calibration pattern from the object coordinate space (in which object points are
|
|
specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms,
|
|
the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space
|
|
to camera coordinate space of the first camera of the stereo pair.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view, see parameter description
|
|
of previous output parameter ( rvecs ).</dd>
|
|
<dd><code>flags</code> - Different flags that may be zero or a combination of the following values:
|
|
<ul>
<li>
REF: fisheye::CALIB_FIX_INTRINSIC Fix K1, K2 and D1, D2 so that only the R and T matrices
are estimated.
</li>
<li>
REF: fisheye::CALIB_USE_INTRINSIC_GUESS K1, K2 contain valid initial values of
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
center (imageSize is used), and focal distances are computed in a least-squares fashion.
</li>
<li>
REF: fisheye::CALIB_RECOMPUTE_EXTRINSIC Extrinsics will be recomputed after each iteration
of intrinsic optimization.
</li>
<li>
REF: fisheye::CALIB_CHECK_COND The functions will check the validity of the condition number.
</li>
<li>
REF: fisheye::CALIB_FIX_SKEW The skew coefficient (alpha) is set to zero and stays zero.
</li>
<li>
REF: fisheye::CALIB_FIX_K1,..., REF: fisheye::CALIB_FIX_K4 Selected distortion coefficients are set to zero and stay
zero.
</li>
</ul></dd>
|
|
<dd><code>criteria</code> - Termination criteria for the iterative optimization algorithm.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List,int)">
|
|
<h3>fisheye_stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">fisheye_stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs,
|
|
int flags)</span></div>
|
|
<div class="block">Performs stereo calibration</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of the calibration pattern points.</dd>
|
|
<dd><code>imagePoints1</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the first camera.</dd>
|
|
<dd><code>imagePoints2</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the second camera.</dd>
|
|
<dd><code>K1</code> - Input/output first camera intrinsic matrix:
|
|
\(\vecthreethree{f_x^{(j)}}{0}{c_x^{(j)}}{0}{f_y^{(j)}}{c_y^{(j)}}{0}{0}{1}\) , \(j = 0,\, 1\) . If
|
|
any of REF: fisheye::CALIB_USE_INTRINSIC_GUESS , REF: fisheye::CALIB_FIX_INTRINSIC are specified,
|
|
some or all of the matrix components must be initialized.</dd>
|
|
<dd><code>D1</code> - Input/output vector of distortion coefficients \(\distcoeffsfisheye\) of 4 elements.</dd>
|
|
<dd><code>K2</code> - Input/output second camera intrinsic matrix. The parameter is similar to K1 .</dd>
|
|
<dd><code>D2</code> - Input/output lens distortion coefficients for the second camera. The parameter is
|
|
similar to D1 .</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize camera intrinsic matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix between the 1st and the 2nd camera coordinate systems.</dd>
|
|
<dd><code>T</code> - Output translation vector between the coordinate systems of the cameras.</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors ( REF: Rodrigues ) estimated for each pattern view in the
|
|
coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each
|
|
i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter
|
|
description) brings the calibration pattern from the object coordinate space (in which object points are
|
|
specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms,
|
|
the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space
|
|
to camera coordinate space of the first camera of the stereo pair.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view, see parameter description
|
|
of previous output parameter ( rvecs ).</dd>
|
|
<dd><code>flags</code> - Different flags that may be zero or a combination of the following values:
|
|
<ul>
<li>
REF: fisheye::CALIB_FIX_INTRINSIC Fix K1, K2 and D1, D2 so that only the R and T matrices
are estimated.
</li>
<li>
REF: fisheye::CALIB_USE_INTRINSIC_GUESS K1, K2 contain valid initial values of
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
center (imageSize is used), and focal distances are computed in a least-squares fashion.
</li>
<li>
REF: fisheye::CALIB_RECOMPUTE_EXTRINSIC Extrinsics will be recomputed after each iteration
of intrinsic optimization.
</li>
<li>
REF: fisheye::CALIB_CHECK_COND The functions will check the validity of the condition number.
</li>
<li>
REF: fisheye::CALIB_FIX_SKEW The skew coefficient (alpha) is set to zero and stays zero.
</li>
<li>
REF: fisheye::CALIB_FIX_K1,..., REF: fisheye::CALIB_FIX_K4 Selected distortion coefficients are set to zero and stay
zero.
</li>
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,java.util.List,java.util.List)">
|
|
<h3>fisheye_stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">fisheye_stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> rvecs,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> tvecs)</span></div>
|
|
<div class="block">Performs stereo calibration</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Vector of vectors of the calibration pattern points.</dd>
|
|
<dd><code>imagePoints1</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the first camera.</dd>
|
|
<dd><code>imagePoints2</code> - Vector of vectors of the projections of the calibration pattern points,
|
|
observed by the second camera.</dd>
|
|
<dd><code>K1</code> - Input/output first camera intrinsic matrix:
|
|
\(\vecthreethree{f_x^{(j)}}{0}{c_x^{(j)}}{0}{f_y^{(j)}}{c_y^{(j)}}{0}{0}{1}\) , \(j = 0,\, 1\) . If
|
|
any of REF: fisheye::CALIB_USE_INTRINSIC_GUESS , REF: fisheye::CALIB_FIX_INTRINSIC are specified,
|
|
some or all of the matrix components must be initialized.</dd>
|
|
<dd><code>D1</code> - Input/output vector of distortion coefficients \(\distcoeffsfisheye\) of 4 elements.</dd>
|
|
<dd><code>K2</code> - Input/output second camera intrinsic matrix. The parameter is similar to K1 .</dd>
|
|
<dd><code>D2</code> - Input/output lens distortion coefficients for the second camera. The parameter is
|
|
similar to D1 .</dd>
|
|
<dd><code>imageSize</code> - Size of the image used only to initialize camera intrinsic matrix.</dd>
|
|
<dd><code>R</code> - Output rotation matrix between the 1st and the 2nd camera coordinate systems.</dd>
|
|
<dd><code>T</code> - Output translation vector between the coordinate systems of the cameras.</dd>
|
|
<dd><code>rvecs</code> - Output vector of rotation vectors ( REF: Rodrigues ) estimated for each pattern view in the
|
|
coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each
|
|
i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter
|
|
description) brings the calibration pattern from the object coordinate space (in which object points are
|
|
specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms,
|
|
the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space
|
|
to camera coordinate space of the first camera of the stereo pair.</dd>
|
|
<dd><code>tvecs</code> - Output vector of translation vectors estimated for each pattern view, see parameter description
|
|
of the previous output parameter ( rvecs ).

The flags parameter, omitted in this overload, may be zero or a combination of the following values:
<ul>
<li>
REF: fisheye::CALIB_FIX_INTRINSIC Fix K1, K2 and D1, D2 so that only the R and T matrices
are estimated.
</li>
<li>
REF: fisheye::CALIB_USE_INTRINSIC_GUESS K1, K2 contain valid initial values of
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
center (imageSize is used), and focal distances are computed in a least-squares fashion.
</li>
<li>
REF: fisheye::CALIB_RECOMPUTE_EXTRINSIC Extrinsics will be recomputed after each iteration
of intrinsic optimization.
</li>
<li>
REF: fisheye::CALIB_CHECK_COND The functions will check the validity of the condition number.
</li>
<li>
REF: fisheye::CALIB_FIX_SKEW The skew coefficient (alpha) is set to zero and stays zero.
</li>
<li>
REF: fisheye::CALIB_FIX_K1,..., REF: fisheye::CALIB_FIX_K4 Selected distortion coefficients are set to zero and stay
zero.
</li>
</ul></dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)">
|
|
<h3>fisheye_stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">fisheye_stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat,int)">
|
|
<h3>fisheye_stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">fisheye_stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T,
|
|
int flags)</span></div>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_stereoCalibrate(java.util.List,java.util.List,java.util.List,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Size,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_stereoCalibrate</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">double</span> <span class="element-name">fisheye_stereoCalibrate</span><wbr><span class="parameters">(<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> objectPoints,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints1,
|
|
<a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/List.html" title="class or interface in java.util" class="external-link">List</a><<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>> imagePoints2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D1,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> K2,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> D2,
|
|
<a href="../core/Size.html" title="class in org.opencv.core">Size</a> imageSize,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> R,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> T)</span></div>
|
|
</section>
|
|
</li>
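A typical call to the overloads above can be sketched as follows. This is a minimal, hypothetical example: the point lists would come from a real corner detector (e.g. <code>findChessboardCorners</code>) run on synchronized image pairs, and it assumes the OpenCV native library is loadable (the <code>fisheye_CALIB_*</code> constants are the Java names of the <code>fisheye::CALIB_*</code> flags listed above).

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import java.util.ArrayList;
import java.util.List;

public class FisheyeStereoCalibrateSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // native bindings required

        // One Mat per captured view: pattern points in object space (CV_32FC3)
        // and their projections in each camera (CV_32FC2).
        List<Mat> objectPoints = new ArrayList<>();
        List<Mat> imagePoints1 = new ArrayList<>();
        List<Mat> imagePoints2 = new ArrayList<>();
        // ... fill the lists from detected calibration-pattern corners ...

        // Intrinsics are estimated from scratch here (no CALIB_USE_INTRINSIC_GUESS).
        Mat K1 = new Mat(), D1 = new Mat(), K2 = new Mat(), D2 = new Mat();
        Mat R = new Mat(), T = new Mat();
        Size imageSize = new Size(1280, 720);

        double rms = Calib3d.fisheye_stereoCalibrate(
                objectPoints, imagePoints1, imagePoints2,
                K1, D1, K2, D2, imageSize, R, T,
                Calib3d.fisheye_CALIB_RECOMPUTE_EXTRINSIC
                        | Calib3d.fisheye_CALIB_FIX_SKEW);

        // The return value is the RMS reprojection error over all views.
        System.out.println("RMS reprojection error: " + rms);
    }
}
```

The estimated R and T would then typically be fed into <code>fisheye_stereoRectify</code> to compute the rectification transforms.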
|
|
<li>
|
|
<section class="detail" id="fisheye_solvePnP(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,org.opencv.core.TermCriteria)">
|
|
<h3>fisheye_solvePnP</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnP</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences for fisheye camera moodel.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector<Point2d> can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>flags</code> - Method for solving a PnP problem: see REF: calib3d_solvePnP_flags</dd>
|
|
<dd><code>criteria</code> - Termination criteria for internal undistortPoints call.
|
|
The function internally undistorts the points with REF: undistortPoints and calls REF: cv::solvePnP,
so the inputs are very similar. More information about Perspective-n-Point is available in
REF: calib3d_solvePnP.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
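The call above can be sketched as follows. This is a minimal, hypothetical example: the four object points, their image detections, and the intrinsics are made-up values for illustration, and it assumes the OpenCV native library is loadable.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class FisheyeSolvePnPSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // native bindings required

        // Four corners of a 10 cm square in the object coordinate space,
        // with hypothetical detections in the image.
        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(0.0, 0.0, 0.0), new Point3(0.1, 0.0, 0.0),
                new Point3(0.1, 0.1, 0.0), new Point3(0.0, 0.1, 0.0));
        MatOfPoint2f imagePoints = new MatOfPoint2f(
                new Point(600, 340), new Point(680, 342),
                new Point(678, 420), new Point(598, 418));

        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 300.0); cameraMatrix.put(1, 1, 300.0);
        cameraMatrix.put(0, 2, 640.0); cameraMatrix.put(1, 2, 360.0);
        Mat distCoeffs = Mat.zeros(4, 1, CvType.CV_64F); // fisheye model: 4 coefficients

        Mat rvec = new Mat(), tvec = new Mat();
        boolean ok = Calib3d.fisheye_solvePnP(objectPoints, imagePoints,
                cameraMatrix, distCoeffs, rvec, tvec,
                false, Calib3d.SOLVEPNP_ITERATIVE); // no extrinsic guess

        // rvec (Rodrigues rotation) and tvec map object points into camera space.
        System.out.println("solved=" + ok + " tvec=" + tvec.dump());
    }
}
```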
|
|
<li>
|
|
<section class="detail" id="fisheye_solvePnP(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int)">
|
|
<h3>fisheye_solvePnP</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnP</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int flags)</span></div>
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>flags</code> - Method for solving a PnP problem: see REF: calib3d_solvePnP_flags
|
|
The function internally undistorts the points with REF: undistortPoints and then calls REF: cv::solvePnP; the inputs are therefore very similar. See REF: calib3d_solvePnP for more information about Perspective-n-Point.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
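The rvec returned here is an axis-angle (Rodrigues) vector, not a rotation matrix. As a minimal pure-Java sketch of the conversion that REF: Rodrigues performs (illustrative only; in real code use Calib3d.Rodrigues on the output Mat):

```java
public class RodriguesSketch {
    // Convert a Rodrigues rotation vector (unit axis scaled by angle t) to a
    // 3x3 rotation matrix: R = I + sin(t)*K + (1 - cos(t))*K^2, where K is
    // the cross-product matrix of the unit axis.
    static double[][] rodrigues(double[] rvec) {
        double t = Math.sqrt(rvec[0]*rvec[0] + rvec[1]*rvec[1] + rvec[2]*rvec[2]);
        double[][] identity = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        if (t < 1e-12) return identity;          // zero rotation -> identity
        double x = rvec[0]/t, y = rvec[1]/t, z = rvec[2]/t;
        double c = Math.cos(t), s = Math.sin(t), v = 1 - c;
        return new double[][]{
            {c + x*x*v,   x*y*v - z*s, x*z*v + y*s},
            {y*x*v + z*s, c + y*y*v,   y*z*v - x*s},
            {z*x*v - y*s, z*y*v + x*s, c + z*z*v}};
    }

    public static void main(String[] args) {
        // 90 degrees about the z-axis maps the x-axis onto the y-axis.
        double[][] R = rodrigues(new double[]{0, 0, Math.PI / 2});
        double[] p = {1, 0, 0};
        double[] q = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) q[i] += R[i][j] * p[j];
        System.out.printf("%.3f %.3f %.3f%n", q[0], q[1], q[2]);
    }
}
```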
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_solvePnP(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean)">
|
|
<h3>fisheye_solvePnP</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnP</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess)</span></div>
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.
|
|
The function internally undistorts the points with REF: undistortPoints and then calls REF: cv::solvePnP; the inputs are therefore very similar. See REF: calib3d_solvePnP for more information about Perspective-n-Point.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_solvePnP(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
|
|
<h3>fisheye_solvePnP</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnP</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec)</span></div>
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.
The function internally undistorts the points with REF: undistortPoints and then calls REF: cv::solvePnP; the inputs are therefore very similar. See REF: calib3d_solvePnP for more information about Perspective-n-Point.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double,org.opencv.core.Mat,int,org.opencv.core.TermCriteria)">
|
|
<h3>fisheye_solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int flags,
|
|
<a href="../core/TermCriteria.html" title="class in org.opencv.core">TermCriteria</a> criteria)</span></div>
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations.</dd>
|
|
<dd><code>reprojectionError</code> - Inlier threshold value used by the RANSAC procedure. The parameter value
|
|
is the maximum allowed distance between the observed and computed point projections to consider it
|
|
an inlier.</dd>
|
|
<dd><code>confidence</code> - The probability that the algorithm produces a useful result.</dd>
|
|
<dd><code>inliers</code> - Output vector that contains indices of inliers in objectPoints and imagePoints .</dd>
|
|
<dd><code>flags</code> - Method for solving a PnP problem: see REF: calib3d_solvePnP_flags
|
|
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
|
|
coordinate frame to the camera coordinate frame, using different methods:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
</li>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
|
|
</li>
|
|
</ul></dd>
|
|
<dd><code>criteria</code> - Termination criteria for internal undistortPoints call.
|
|
The function internally undistorts the points with REF: undistortPoints and then calls REF: cv::solvePnP; the inputs are therefore very similar. See REF: calib3d_solvePnP for more information about Perspective-n-Point.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
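For REF: SOLVEPNP_IPPE_SQUARE the four object points must follow exactly the order listed above. A small sketch that builds them in that order (squareObjectPoints is a hypothetical helper for illustration, not part of the OpenCV API):

```java
public class SquareTarget {
    // Build the four planar object points (z = 0) for SOLVEPNP_IPPE_SQUARE,
    // in the mandatory order: point 0 top-left, point 1 top-right,
    // point 2 bottom-right, point 3 bottom-left.
    static double[][] squareObjectPoints(double squareLength) {
        double h = squareLength / 2.0;
        return new double[][]{
            {-h,  h, 0},   // point 0: [-squareLength/2,  squareLength/2, 0]
            { h,  h, 0},   // point 1: [ squareLength/2,  squareLength/2, 0]
            { h, -h, 0},   // point 2: [ squareLength/2, -squareLength/2, 0]
            {-h, -h, 0}};  // point 3: [-squareLength/2, -squareLength/2, 0]
    }

    public static void main(String[] args) {
        for (double[] p : squareObjectPoints(0.05))   // e.g. a 5 cm marker
            System.out.printf("[%.3f, %.3f, %.3f]%n", p[0], p[1], p[2]);
    }
}
```

The resulting Nx3 array can be copied into an objectPoints Mat (e.g. of type CV_64FC3) before calling the solver.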
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double,org.opencv.core.Mat,int)">
|
|
<h3>fisheye_solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers,
|
|
int flags)</span></div>
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations.</dd>
|
|
<dd><code>reprojectionError</code> - Inlier threshold value used by the RANSAC procedure. The parameter value
|
|
is the maximum allowed distance between the observed and computed point projections to consider it
|
|
an inlier.</dd>
|
|
<dd><code>confidence</code> - The probability that the algorithm produces a useful result.</dd>
|
|
<dd><code>inliers</code> - Output vector that contains indices of inliers in objectPoints and imagePoints .</dd>
|
|
<dd><code>flags</code> - Method for solving a PnP problem: see REF: calib3d_solvePnP_flags
|
|
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
|
|
coordinate frame to the camera coordinate frame, using different methods:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
</li>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
|
|
</li>
|
|
</ul>
|
|
The function internally undistorts the points with REF: undistortPoints and then calls REF: cv::solvePnP; the inputs are therefore very similar. See REF: calib3d_solvePnP for more information about Perspective-n-Point.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
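The reprojectionError threshold operates on Euclidean pixel distances between observed and reprojected points. A minimal sketch of the inlier test the RANSAC loop applies, under that assumption (illustrative only, not OpenCV's internal code):

```java
public class InlierTest {
    // A correspondence is counted as an inlier when the Euclidean distance
    // between the observed image point and the reprojected point is within
    // the reprojectionError threshold.
    static boolean isInlier(double[] observed, double[] projected, double threshold) {
        double dx = observed[0] - projected[0];
        double dy = observed[1] - projected[1];
        return Math.sqrt(dx * dx + dy * dy) <= threshold;
    }

    public static void main(String[] args) {
        double threshold = 8.0;  // pixels, the default reprojectionError
        // ~1.41 px error: inlier at the 8 px threshold.
        System.out.println(isInlier(new double[]{100, 100}, new double[]{101, 101}, threshold));
        // 20 px error: outlier.
        System.out.println(isInlier(new double[]{100, 100}, new double[]{120, 100}, threshold));
    }
}
```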
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double,org.opencv.core.Mat)">
|
|
<h3>fisheye_solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> inliers)</span></div>
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations.</dd>
|
|
<dd><code>reprojectionError</code> - Inlier threshold value used by the RANSAC procedure. The parameter value
|
|
is the maximum allowed distance between the observed and computed point projections to consider it
|
|
an inlier.</dd>
|
|
<dd><code>confidence</code> - The probability that the algorithm produces a useful result.</dd>
|
|
<dd><code>inliers</code> - Output vector that contains indices of inliers in objectPoints and imagePoints .
|
|
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
|
|
coordinate frame to the camera coordinate frame, using different methods:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
</li>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
|
|
</li>
|
|
</ul>
|
|
The function internally undistorts the points with REF: undistortPoints and then calls REF: cv::solvePnP; the inputs are therefore very similar. See REF: calib3d_solvePnP for more information about Perspective-n-Point.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float,double)">
|
|
<h3>fisheye_solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError,
|
|
double confidence)</span></div>
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations.</dd>
|
|
<dd><code>reprojectionError</code> - Inlier threshold value used by the RANSAC procedure. The parameter value
|
|
is the maximum allowed distance between the observed and computed point projections to consider it
|
|
an inlier.</dd>
|
|
<dd><code>confidence</code> - The probability that the algorithm produces a useful result.
|
|
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
|
|
coordinate frame to the camera coordinate frame, using different methods:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
</li>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
|
|
</li>
|
|
</ul>
|
|
The function internally undistorts the points with REF: undistortPoints and then calls REF: cv::solvePnP; the inputs are therefore very similar. See REF: calib3d_solvePnP for more information about Perspective-n-Point.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int,float)">
|
|
<h3>fisheye_solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount,
|
|
float reprojectionError)</span></div>
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations.</dd>
|
|
<dd><code>reprojectionError</code> - Inlier threshold value used by the RANSAC procedure. The parameter value
|
|
is the maximum allowed distance between the observed and computed point projections to consider it
|
|
an inlier.
|
|
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
|
|
coordinate frame to the camera coordinate frame, using different methods:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
</li>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
|
|
</li>
|
|
</ul>
|
|
The function internally undistorts the points with REF: undistortPoints and then calls REF: cv::solvePnP; the inputs are therefore very similar. See REF: calib3d_solvePnP for more information about Perspective-n-Point.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
|
|
</section>
|
|
</li>
|
|
<li>
|
|
<section class="detail" id="fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean,int)">
|
|
<h3>fisheye_solvePnPRansac</h3>
|
|
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> objectPoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> imagePoints,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> cameraMatrix,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> distCoeffs,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> rvec,
|
|
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a> tvec,
|
|
boolean useExtrinsicGuess,
|
|
int iterationsCount)</span></div>
|
|
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
|
|
<dl class="notes">
|
|
<dt>Parameters:</dt>
|
|
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
|
|
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
|
|
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
|
|
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
|
|
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
|
|
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
|
|
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
|
|
the model coordinate system to the camera coordinate system.</dd>
|
|
<dd><code>tvec</code> - Output translation vector.</dd>
|
|
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses
|
|
the provided rvec and tvec values as initial approximations of the rotation and translation
|
|
vectors, respectively, and further optimizes them.</dd>
|
|
<dd><code>iterationsCount</code> - Number of iterations.
|
|
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
|
|
coordinate frame to the camera coordinate frame, using different methods:
|
|
<ul>
|
|
<li>
|
|
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.
|
|
</li>
|
|
<li>
|
|
REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
|
|
Number of input points must be 4. Object points must be defined in the following order:
|
|
</li>
|
|
<li>
|
|
point 0: [-squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 1: [ squareLength / 2, squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 2: [ squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
point 3: [-squareLength / 2, -squareLength / 2, 0]
|
|
</li>
|
|
<li>
|
|
for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
|
|
</li>
|
|
</ul>
|
|
The function interally undistorts points with REF: undistortPoints and call REF: cv::solvePnP,
|
|
thus the input are very similar. More information about Perspective-n-Points is described in REF: calib3d_solvePnP
|
|
for more information.</dd>
|
|
<dt>Returns:</dt>
|
|
<dd>automatically generated</dd>
|
|
</dl>
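As an illustration of the SOLVEPNP_IPPE_SQUARE corner ordering listed above, the following self-contained Java sketch builds the four planar object points for a marker of side <code>squareLength</code>. The class name <code>IppeSquarePoints</code> is ours, not part of OpenCV; in a real call you would copy these coordinates into a <code>MatOfPoint3f</code> before passing them as <code>objectPoints</code>.

```java
// Builds the four planar object points in the ordering that the
// SOLVEPNP_IPPE_SQUARE variant expects (marker corners with Z = 0,
// starting at the top-left and going clockwise in image convention).
public class IppeSquarePoints {
    public static double[][] corners(double squareLength) {
        double h = squareLength / 2.0;
        return new double[][] {
            {-h,  h, 0.0},  // point 0: [-squareLength/2,  squareLength/2, 0]
            { h,  h, 0.0},  // point 1: [ squareLength/2,  squareLength/2, 0]
            { h, -h, 0.0},  // point 2: [ squareLength/2, -squareLength/2, 0]
            {-h, -h, 0.0}   // point 3: [-squareLength/2, -squareLength/2, 0]
        };
    }

    public static void main(String[] args) {
        for (double[] p : corners(0.05)) {  // a 5 cm marker, for example
            System.out.printf("[%.3f, %.3f, %.3f]%n", p[0], p[1], p[2]);
        }
    }
}
```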
</section>
</li>
<li>
<section class="detail" id="fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,boolean)">
<h3>fisheye_solvePnPRansac</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;objectPoints,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;imagePoints,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;distCoeffs,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;rvec,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;tvec,
boolean&nbsp;useExtrinsicGuess)</span></div>
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
<dl class="notes">
<dt>Parameters:</dt>
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
the model coordinate system to the camera coordinate system.</dd>
<dd><code>tvec</code> - Output translation vector.</dd>
<dd><code>useExtrinsicGuess</code> - Parameter used for #SOLVEPNP_ITERATIVE. If true, the function uses
the provided rvec and tvec values as initial approximations of the rotation and translation
vectors, respectively, and further optimizes them.
The RANSAC reprojection-error threshold (not exposed by this overload) is the maximum allowed distance
between the observed and computed point projections for a point to be considered an inlier.
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
coordinate frame to the camera coordinate frame, using different methods:
<ul>
<li>
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
</li>
<li>
REF: SOLVEPNP_IPPE: the number of input points must be &gt;= 4 and the object points must be coplanar.
</li>
<li>
REF: SOLVEPNP_IPPE_SQUARE: special case suitable for marker pose estimation.
The number of input points must be 4 and the object points must be defined in the following order:
<ul>
<li>point 0: [-squareLength / 2, squareLength / 2, 0]</li>
<li>point 1: [ squareLength / 2, squareLength / 2, 0]</li>
<li>point 2: [ squareLength / 2, -squareLength / 2, 0]</li>
<li>point 3: [-squareLength / 2, -squareLength / 2, 0]</li>
</ul>
</li>
<li>
For all other flags, the number of input points must be &gt;= 4 and the object points can be in any configuration.
</li>
</ul>
The function internally undistorts points with REF: undistortPoints and calls REF: cv::solvePnP,
so the inputs are very similar. See REF: calib3d_solvePnP for more information about Perspective-n-Point.</dd>
<dt>Returns:</dt>
<dd>automatically generated</dd>
</dl>
</section>
</li>
<li>
<section class="detail" id="fisheye_solvePnPRansac(org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat,org.opencv.core.Mat)">
<h3>fisheye_solvePnPRansac</h3>
<div class="member-signature"><span class="modifiers">public static</span> <span class="return-type">boolean</span> <span class="element-name">fisheye_solvePnPRansac</span><wbr><span class="parameters">(<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;objectPoints,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;imagePoints,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;cameraMatrix,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;distCoeffs,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;rvec,
<a href="../core/Mat.html" title="class in org.opencv.core">Mat</a>&nbsp;tvec)</span></div>
<div class="block">Finds an object pose from 3D-2D point correspondences using the RANSAC scheme for the fisheye camera model.</div>
<dl class="notes">
<dt>Parameters:</dt>
<dd><code>objectPoints</code> - Array of object points in the object coordinate space, Nx3 1-channel or
1xN/Nx1 3-channel, where N is the number of points. vector&lt;Point3d&gt; can also be passed here.</dd>
<dd><code>imagePoints</code> - Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,
where N is the number of points. vector&lt;Point2d&gt; can also be passed here.</dd>
<dd><code>cameraMatrix</code> - Input camera intrinsic matrix \(\cameramatrix{A}\) .</dd>
<dd><code>distCoeffs</code> - Input vector of distortion coefficients (4x1/1x4).</dd>
<dd><code>rvec</code> - Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from
the model coordinate system to the camera coordinate system.</dd>
<dd><code>tvec</code> - Output translation vector.
When the omitted <code>useExtrinsicGuess</code> flag is set (see the other overloads), the function uses
the provided rvec and tvec values as initial approximations of the rotation and translation
vectors, respectively, and further optimizes them.
The RANSAC reprojection-error threshold (not exposed by this overload) is the maximum allowed distance
between the observed and computed point projections for a point to be considered an inlier.
This function returns the rotation and the translation vectors that transform a 3D point expressed in the object
coordinate frame to the camera coordinate frame, using different methods:
<ul>
<li>
P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
</li>
<li>
REF: SOLVEPNP_IPPE: the number of input points must be &gt;= 4 and the object points must be coplanar.
</li>
<li>
REF: SOLVEPNP_IPPE_SQUARE: special case suitable for marker pose estimation.
The number of input points must be 4 and the object points must be defined in the following order:
<ul>
<li>point 0: [-squareLength / 2, squareLength / 2, 0]</li>
<li>point 1: [ squareLength / 2, squareLength / 2, 0]</li>
<li>point 2: [ squareLength / 2, -squareLength / 2, 0]</li>
<li>point 3: [-squareLength / 2, -squareLength / 2, 0]</li>
</ul>
</li>
<li>
For all other flags, the number of input points must be &gt;= 4 and the object points can be in any configuration.
</li>
</ul>
The function internally undistorts points with REF: undistortPoints and calls REF: cv::solvePnP,
so the inputs are very similar. See REF: calib3d_solvePnP for more information about Perspective-n-Point.</dd>
<dt>Returns:</dt>
<dd>automatically generated</dd>
</dl>
</section>
</li>
</ul>
</section>
</li>
</ul>
</section>
<!-- ========= END OF CLASS DATA ========= -->
</main>
<footer role="contentinfo">
<hr>
<p class="legal-copy"><small>Generated on 2025-07-02 13:16:04 / OpenCV 4.12.0</small></p>
</footer>
</div>
</div>
</body>
</html>