Search results for: depth map

Number of results: 349,140

1999
Caleb Lyness Otto-Carl Marte Bryan Wong Patrick Marais

A system that constructs a three-dimensional model from two-dimensional images taken from multiple viewpoints is presented. The input images were obtained by filming an object, rotated on a turntable, against a dark background. The modelling process begins with the extraction of “silhouettes” from the input images. These silhouettes are used in conjunction with the a ca...
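The silhouette-extraction step lends itself to a short illustration. Below is a minimal sketch, assuming the turntable frames are stored as files such as frame_000.png (a hypothetical name) and that the dark background can be separated by simple intensity thresholding; the paper's own extraction method may differ.

import cv2
import numpy as np

def extract_silhouette(frame_path, thresh=40):
    """Return a binary silhouette mask for one turntable frame.

    Assumes the object is brighter than the dark background, so a
    global intensity threshold separates foreground from background.
    """
    gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    # Pixels brighter than the threshold are treated as object.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Morphological closing removes small holes inside the silhouette.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# Example: one silhouette per viewpoint (file name is illustrative only).
silhouette = extract_silhouette("frame_000.png")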

Journal: CoRR, 2018
Amirhossein Jabalameli Nabil Ettehadi Aman Behal

In this paper, we investigate the problem of grasping novel objects in unstructured environments. Addressing this problem requires consideration of the object's geometry, reachability, and force-closure analysis. We propose a framework for grasping unknown objects by localizing contact regions on the contours formed by a set of depth edges in a single-view 2D depth image. According to the edg...
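As a rough illustration of the first step, depth edges in a single-view depth image can be located from the depth gradient, and the contours they form are candidate regions for further contact analysis. A minimal sketch follows; the gradient threshold is an assumed value, not one from the paper.

import cv2
import numpy as np

def depth_edge_contours(depth, grad_thresh=0.02):
    """Find contours formed by depth edges in a single-view depth image.

    depth: float32 array of per-pixel depth in meters.
    grad_thresh: minimum depth-gradient magnitude (meters/pixel) that
    counts as a depth edge; this value is an assumption.
    """
    gx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
    edge_mask = (np.hypot(gx, gy) > grad_thresh).astype(np.uint8) * 255
    # Contours of the edge mask approximate the object's depth contours.
    contours, _ = cv2.findContours(edge_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours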

2005
Michael S. Landy Laurence T. Maloney Mark J. Young

We describe a series of experiments designed to test (1) whether human observers combine depth cues using a weighted average when depth estimates in different maps are nearly consistent, (2) whether human observers behave as robust estimators when depths become increasingly inconsistent, and (3) whether the weights used in the linear rule of combination change to reflect the estimated reliabili...
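The weighted-average rule tested in experiment (1) can be written compactly: the combined depth at each location is a convex combination of the per-cue depth maps, with weights typically tied to each cue's estimated reliability (e.g., inverse variance). A minimal sketch of that linear rule (variable names and the reliability choice are illustrative, not the authors' exact model):

import numpy as np

def combine_depth_cues(depth_maps, variances):
    """Weighted-average (linear) combination of per-cue depth maps.

    depth_maps: list of 2-D arrays, one depth estimate per cue.
    variances:  list of scalars, estimated variance of each cue;
                weights are normalized inverse variances.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()                      # weights sum to 1 (convex combination)
    stack = np.stack(depth_maps)      # shape: (num_cues, H, W)
    return np.tensordot(w, stack, axes=1)

# Example: the texture cue has half the variance of the motion cue,
# so it receives twice the weight.
d_texture = np.full((4, 4), 1.0)
d_motion = np.full((4, 4), 2.0)
combined = combine_depth_cues([d_texture, d_motion], [0.5, 1.0])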

2018
Lingyun Zhao Miles Hansard Andrea Cavallaro

This paper describes the construction of a layered scene model, based on a single hazy image that has sufficient depth variation. A depth map and radiance image are estimated by standard dehazing methods. The radiance image is then segmented into a small number of clusters, and a corresponding scene plane is estimated for each. This provides the basic structure of a layered scene model, without...
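Standard dehazing ties depth to the estimated transmission: under the usual haze model I = J·t + A·(1 − t) with t(x) = exp(−β·d(x)), depth is recovered up to the unknown scattering coefficient β as d = −ln(t)/β, and the radiance image can then be clustered into a small number of layers. A minimal sketch of those two steps; β, the cluster count, and the naive k-means clustering are assumptions standing in for whatever dehazing and segmentation methods the paper actually uses.

import numpy as np

def depth_from_transmission(t, beta=1.0):
    """Relative depth from a transmission map: t = exp(-beta * d)."""
    t = np.clip(t, 1e-3, 1.0)      # avoid log(0) in fully opaque regions
    return -np.log(t) / beta       # depth up to the unknown scale 1/beta

def cluster_radiance(radiance, k=4, iters=20, seed=0):
    """Naive k-means over radiance pixels; each cluster ~ one scene layer."""
    rng = np.random.default_rng(seed)
    pixels = radiance.reshape(-1, radiance.shape[-1]).astype(float)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((pixels[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([pixels[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels.reshape(radiance.shape[:2])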

2008
Yo-Sung Ho Sung-Yeol Kim Eun-Kyung Lee

In this paper, we present a new system to generate multiview video sequences with depth information (MVD) by integrating multiple high-definition (HD) camera arrays and one standard-definition (SD) depth camera. In the proposed hybrid camera system, we first create the initial disparity for each HD color image by applying a three-dimensional (3-D) warping operation on the depth map acquired by t...
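The 3-D warping step that produces the initial disparity relies on the standard relation between depth and disparity for a rectified pair: disparity = focal length × baseline / depth. A minimal sketch of converting a metric depth map into per-pixel disparities; the focal length and baseline below are placeholders, not the system's calibration.

import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert a metric depth map to disparities (in pixels).

    depth_m:    per-pixel depth in meters (float array).
    focal_px:   focal length in pixels of the rectified target view.
    baseline_m: distance between the two camera centers in meters.
    """
    depth = np.maximum(depth_m, 1e-6)        # guard against zero depth
    return focal_px * baseline_m / depth

# Illustrative numbers only: f = 1000 px, baseline = 10 cm, depth 2 m -> 50 px.
disp = depth_to_disparity(np.full((720, 1280), 2.0), focal_px=1000.0,
                          baseline_m=0.1)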

2017
Yang Chen Martin Alain Aljosa Smolic

Depth map estimation is a crucial task in computer vision, and new approaches have recently emerged that take advantage of light fields, as this imaging modality captures much more information about the angular direction of light rays than common approaches based on stereoscopic or multi-view images. In this paper, we propose a novel depth estimation method from light fields based on ex...

2006
Ping Li Dirk Farin Rene Klein Gunnewiek Peter H. N. de With

The depth-image-based rendering technique is a promising technology for three-dimensional television (3D-TV) systems. For such a system, one of the key components is to generate a high-quality per-pixel depth map, particularly for already existing 2D video sequences. This paper proposes a framework for creating the depth map from uncalibrated video sequences of static scenes using the Structure...
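A very reduced two-frame version of the Structure-from-Motion step can be sketched with OpenCV: match features between frames, estimate the relative pose, and triangulate to obtain sparse depths that a later stage would densify into a per-pixel depth map. The intrinsic matrix K is a placeholder here; the paper works with uncalibrated video, where the intrinsics would instead have to be estimated.

import cv2
import numpy as np

def sparse_depth_two_frames(img1, img2, K):
    """Sparse depths from two frames of a static scene (minimal SfM step)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Relative pose of frame 2 w.r.t. frame 1 (up to an unknown scale).
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, inliers = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)   # 4xN homogeneous points
    X /= X[3]
    return X[2]                                      # depths in frame-1 coordinates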

2008
Héctor Yela Pere-Pau Vázquez

Volume models often show high depth complexity, which makes it difficult for the observer to judge spatial relationships accurately. Illustrators often use techniques such as halos or edge darkening to enhance the depth perception of certain structures. Halos may be dark or light, and even colored. Halo construction on a volumetric basis impacts rendering performance due to t...

2015
Kazuki Matsumoto François de Sorbier Hideo Saito

Recent advances in ToF depth sensors enable us to easily retrieve scene depth data at high frame rates. However, the resolution of the depth maps captured by these devices is much lower than that of color images, and the depth data suffers from optical noise. In this paper, we propose an efficient algorithm that upsamples depth map captured by ToF depth cameras and reduces...
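The upsampling idea can be illustrated with a joint (cross) bilateral filter: the low-resolution depth is first resized, then each depth value is recomputed as a weighted average of its neighbours, with weights combining spatial distance and colour similarity in the high-resolution guide image. A minimal, unoptimized sketch; the kernel radius and sigmas are assumed values, and the paper's actual algorithm is more involved.

import cv2
import numpy as np

def joint_bilateral_upsample(depth_lo, color_hi, radius=5,
                             sigma_s=3.0, sigma_c=10.0):
    """Upsample a low-res depth map guided by a high-res color image."""
    H, W = color_hi.shape[:2]
    depth = cv2.resize(depth_lo.astype(np.float32), (W, H),
                       interpolation=cv2.INTER_LINEAR)
    guide = color_hi.astype(np.float32)
    out = np.zeros((H, W), np.float32)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    depth_p = np.pad(depth, radius, mode='edge')
    guide_p = np.pad(guide, ((radius, radius), (radius, radius), (0, 0)),
                     mode='edge')
    for y in range(H):
        for x in range(W):
            dwin = depth_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = guide_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Colour weight: similarity to the centre pixel of the guide image.
            color_w = np.exp(-((gwin - guide[y, x]) ** 2).sum(-1)
                             / (2 * sigma_c ** 2))
            w = spatial * color_w
            out[y, x] = (w * dwin).sum() / w.sum()
    return out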

2013
Woo-Seok Jang

In this paper, we propose a direct depth map acquisition method for a convergent camera array as well as a parallel camera array. Since image rectification is necessary for conventional stereo matching methods, disparity values are found along the same horizontal line of the stereo images. Subsequently, the acquired disparity values are transformed into depth values. However, image rectification may le...
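For the conventional rectified case, the sketch below shows the two steps the abstract refers to: find disparities along the same horizontal line with block matching, then convert disparity to depth via Z = f·B/d. The focal length and baseline are placeholder values, and OpenCV's StereoBM stands in for whichever matcher is actually used.

import cv2
import numpy as np

def disparity_and_depth(left_gray, right_gray, focal_px, baseline_m):
    """Block-matching disparity on a rectified pair, then depth Z = f*B/d."""
    bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disp = bm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full_like(disp, np.inf)
    valid = disp > 0                       # non-positive values = no match found
    depth[valid] = focal_px * baseline_m / disp[valid]
    return disp, depth

# Illustrative calibration values only (not from the paper):
# left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# disp, depth = disparity_and_depth(left, right, focal_px=1000.0, baseline_m=0.1)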

[Chart: number of search results per year]
