
Computer Vision Decoded
Author: EveryPoint
© EveryPoint, Inc
Description
A tidal wave of computer vision innovation is quickly having an impact on everyone's lives, but not everyone has the time to sit down and read through a bunch of news articles and learn what it means for them. In Computer Vision Decoded, we sit down with Jared Heinly, the Chief Scientist at EveryPoint, to discuss topics in today’s quickly evolving world of computer vision and decode what they mean for you. If you want to be sure you understand everything happening in the world of computer vision, don't miss an episode!
18 Episodes
In this episode of Computer Vision Decoded, we bring you a live recording of Jared Heinly's presentation on the evolution of image-based 3D reconstruction, recorded at a Computer Vision Decoded meetup in Pittsburgh. The talk has a visual component, so if you would like to follow along with the slides, we recommend watching the episode on our YouTube channel: https://youtu.be/Gwib5IcTKHI

Follow:
Jared on X: https://x.com/JaredHeinly
Jonathan on X: https://x.com/jonstephens85

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services. Learn more at https://www.everypoint.io
In this episode of Computer Vision Decoded, our hosts Jonathan Stephens and Jared Heinly are joined by Ruilong Li, a researcher at NVIDIA and key contributor to both Nerfstudio and gsplat, to dive deep into 3D Gaussian Splatting. They explore how this relatively new technology works, from the fundamentals of Gaussian representations to the optimization process that creates photorealistic 3D scenes. Ruilong explains the technical details behind Gaussian splatting and discusses the development of the popular gsplat library. The conversation covers practical advice for capturing high-quality data, the iterative training process, and how Gaussian splatting compares to other 3D representations like meshes and NeRFs.

Links:
gsplat: https://github.com/nerfstudio-project/gsplat
Nerfstudio: https://docs.nerf.studio/

Follow:
Ruilong on X: https://x.com/ruilong_li
Jared on X: https://x.com/JaredHeinly
Jonathan on X: https://x.com/jonstephens85

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services. Learn more at https://www.everypoint.io
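For listeners who want a concrete picture of what "Gaussian representations" means in this episode, here is a minimal, generic sketch of the per-Gaussian parameters that a 3D Gaussian Splatting optimizer typically adjusts. This is an illustration only, not the gsplat API; the array shapes and the build_covariance helper are our assumptions for the example.

```python
import numpy as np

# Illustrative parameter set for N 3D Gaussians (not the gsplat API).
# Each Gaussian has a position, an anisotropic shape, an opacity, and
# view-dependent color stored as spherical harmonics coefficients.
N = 1000
means = np.zeros((N, 3))             # 3D centers of the Gaussians
log_scales = np.zeros((N, 3))        # per-axis scale, stored in log space
quaternions = np.tile([1.0, 0.0, 0.0, 0.0], (N, 1))  # rotations (w, x, y, z)
opacities = np.full((N,), 0.1)       # alpha used during blending
sh_coeffs = np.zeros((N, 16, 3))     # degree-3 spherical harmonics per color channel

def build_covariance(log_scale, quat):
    """Assumed helper: covariance = R S S^T R^T for one Gaussian."""
    w, x, y, z = quat / np.linalg.norm(quat)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    S = np.diag(np.exp(log_scale))
    return R @ S @ S.T @ R.T

# During training, all of these arrays are refined by gradient descent so
# that the rendered (splatted) images match the input photos.
cov0 = build_covariance(log_scales[0], quaternions[0])
```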
In this episode of Computer Vision Decoded, hosts Jonathan Stephens and Jared Heinly explore the various types of cameras used in computer vision and 3D reconstruction. They discuss the strengths and weaknesses of smartphone cameras, DSLR and mirrorless cameras, action cameras, drones, and specialized cameras like 360, thermal, and event cameras. The conversation emphasizes the importance of understanding camera specifications, metadata, and the impact of different lenses on image quality. The hosts also provide practical advice for beginners in 3D reconstruction, encouraging them to start with the cameras they already own.

Takeaways:
Smartphones are versatile and user-friendly for photography.
RAW images preserve more data than JPEGs, aiding in post-processing.
Mirrorless and DSLR cameras offer better low-light performance and lens flexibility.
Drones provide unique perspectives and programmable flight paths for capturing images.
360 cameras allow for quick scene capture but may require additional processing for 3D reconstruction.
Event cameras capture rapid changes in intensity, useful for robotics applications.
Thermal and multispectral cameras are specialized for specific applications, not typically used for 3D reconstruction.
Understanding camera metadata is crucial for effective image processing.
Choosing the right camera depends on the specific needs of the project.
Starting with a smartphone is a low barrier to entry for beginners in 3D reconstruction.

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
In this episode, Jonathan Stephens and Jared Heinly delve into the intricacies of COLMAP, a powerful tool for 3D reconstruction from images. They discuss the COLMAP workflow, including feature extraction, correspondence search, incremental reconstruction, and the importance of camera models. The conversation also covers advanced topics like geometric verification, bundle adjustment, and the newer GLOMAP method, which offers a faster alternative to traditional reconstruction techniques. Listeners are encouraged to experiment with COLMAP and learn through hands-on experience.

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
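As context for the workflow described above, here is a minimal sketch of an end-to-end sparse reconstruction using the pycolmap Python bindings to COLMAP. It assumes a folder of photos at images/ and writes results to sparse/; exact option names and signatures can vary between pycolmap versions, so treat this as an outline rather than a recipe.

```python
import pathlib
import pycolmap

# Assumed paths for this sketch.
image_dir = pathlib.Path("images")        # input photos
database_path = pathlib.Path("database.db")
output_dir = pathlib.Path("sparse")
output_dir.mkdir(exist_ok=True)

# 1. Feature extraction: detect and describe keypoints in every image.
pycolmap.extract_features(database_path, image_dir)

# 2. Correspondence search: match features between image pairs and
#    geometrically verify the matches.
pycolmap.match_exhaustive(database_path)

# 3. Incremental reconstruction: register images one by one, triangulate
#    3D points, and run bundle adjustment to refine cameras and points.
reconstructions = pycolmap.incremental_mapping(database_path, image_dir, output_dir)

for idx, rec in reconstructions.items():
    print(idx, rec.summary())
```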
In this episode, we discuss practical tips and challenges in 3D reconstruction from images, focusing on various environments such as urban, indoor, and outdoor settings. We explore issues like repetitive structures, lighting conditions, and the impact of reflections and shadows on reconstruction quality. The conversation also touches on the importance of camera motion, lens distortion, and the role of machine learning in enhancing reconstruction processes. Listeners gain insights into optimizing their 3D capture techniques for better results.

Key Takeaways:
Repetitive structures can confuse computer vision algorithms.
Lighting conditions greatly affect image quality and reconstruction accuracy.
Wide-angle lenses can help capture more unique features.
Indoor environments present unique challenges like textureless walls.
Aerial imaging requires careful management of lens distortion.
Understanding the application context is crucial for effective 3D reconstruction.
Camera motion should be varied to avoid distortion and drift.
Planning captures based on goals can lead to better results.

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services. Learn more at https://www.everypoint.io
In this episode of Computer Vision Decoded, Jonathan Stephens and Jared Heinly explore the concept of depth maps in computer vision. They discuss the basics of depth and depth maps, their applications in smartphones, and the various types of depth maps. The conversation delves into the role of depth maps in photogrammetry and 3D reconstruction, as well as future trends in depth sensing and machine learning. The episode highlights the importance of depth maps in enhancing photography, gaming, and autonomous systems.

Key Takeaways:
Depth maps represent how far away objects are from a sensor.
Smartphones use depth maps for features like portrait mode.
There are multiple types of depth maps, including absolute and relative.
Depth maps are essential in photogrammetry for creating 3D models.
Machine learning is increasingly used for depth estimation.
Depth maps can be generated from various sensors, including LiDAR.
The resolution and baseline of cameras affect depth perception.
Depth maps are used in gaming for rendering and performance optimization.
Sensor fusion combines data from multiple sources for better accuracy.
The future of depth sensing will likely involve more machine learning applications.

Episode Chapters:
00:00 Introduction to Depth Maps
00:13 Understanding Depth in Computer Vision
06:52 Applications of Depth Maps in Photography
07:53 Types of Depth Maps Created by Smartphones
08:31 Depth Measurement Techniques
16:00 Machine Learning and Depth Estimation
19:18 Absolute vs Relative Depth Maps
23:14 Disparity Maps and Depth Ordering
26:53 Depth Maps in Graphics and Gaming
31:24 Depth Maps in Photogrammetry
34:12 Utilizing Depth Maps in 3D Reconstruction
37:51 Sensor Fusion and SLAM Technologies
41:31 Future Trends in Depth Sensing
46:37 Innovations in Computational Photography

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services. Learn more at https://www.everypoint.io
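As a small companion to the disparity and camera-baseline discussion above, here is a minimal sketch of how a disparity map from a rectified stereo pair converts to an absolute depth map via depth = focal length × baseline / disparity. The focal length and baseline below are made-up values for illustration, not numbers from the episode.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth (meters).

    For a rectified stereo pair: depth = f * B / d, so larger disparity
    means the point is closer to the cameras.
    """
    depth = np.zeros_like(disparity_px, dtype=np.float64)
    valid = disparity_px > 0   # zero disparity = no match / infinitely far
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Illustrative values: a 1000-pixel focal length and a 7.5 cm baseline.
disparity = np.array([[50.0, 25.0], [10.0, 0.0]])
print(disparity_to_depth(disparity, focal_length_px=1000.0, baseline_m=0.075))
# 50 px of disparity -> 1.5 m, 25 px -> 3.0 m, 10 px -> 7.5 m, 0 px -> invalid
```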
After an 18-month hiatus, we are back! In this episode of Computer Vision Decoded, hosts Jonathan Stephens and Jared Heinly discuss the latest advancements in computer vision technology, share personal updates, and offer insights from the industry. They explore topics such as real-time 3D reconstruction, computer vision research, SLAM, event cameras, and the impact of generative AI on robotics. The conversation highlights the importance of merging traditional techniques with modern machine learning approaches to solve real-world problems effectively.

Chapters:
00:00 Intro & Personal Updates
04:36 Real-Time 3D Reconstruction on iPhones
09:40 Advancements in SfM
14:56 Event Cameras
17:39 Neural Networks in 3D Reconstruction
26:30 SLAM and Machine Learning Innovation
29:48 Applications of SLAM in Robotics
34:19 NVIDIA's Cosmos and Physical AI
40:18 Generative AI for Real-World Applications
43:50 The Future of Gaussian Splatting and 3D Reconstruction

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
In this episode of Computer Vision Decoded, we dive into our in-house computer vision expert's reaction to the iPhone 15 and iPhone 15 Pro announcement. We cover the camera upgrades, decode what a quad-pixel sensor means, and discuss the importance of depth maps.

Episode timeline:
00:00 Intro
02:59 iPhone 15 Overview
05:15 iPhone 15 Main Camera
07:20 Quad Pixel Sensor Explained
15:45 Depth Maps Explained
22:57 iPhone 15 Pro Overview
27:01 iPhone 15 Pro Cameras
32:20 Spatial Video
36:00 A17 Pro Chipset

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
In this episode of Computer Vision Decoded, we dive into Pierre Moulon's 10 years of experience building OpenMVG. We also cover the impact of open-source software on the computer vision industry and everything involved in building your own project. There is a lot to learn here!

Our guest, Pierre Moulon, is a computer vision research scientist and the creator of OpenMVG, a library for computer vision scientists targeted at the Multiple View Geometry community. The episode follows Pierre's journey building OpenMVG, which he wrote about in an article in his GitHub repository.

Explore OpenMVG on GitHub: https://github.com/openMVG/openMVG
Pierre's article on building OpenMVG: https://github.com/openMVG/openMVG/discussions/2165

Episode timeline:
00:00 Intro
01:00 Pierre Moulon's Background
04:40 What is OpenMVG?
08:43 What is the importance of open-source software for the computer vision community?
12:30 What to look for when deciding to use an open-source project
16:27 What is Multi View Geometry?
24:24 What was the biggest challenge building OpenMVG?
31:00 How do you grow a community around an open-source project?
38:09 Choosing a licensing model for your open-source project
43:07 Funding and sponsorship for your open-source project
46:46 Building an open-source project for your resume
49:53 How to get started with OpenMVG

Contact:
Follow Pierre Moulon on LinkedIn: https://www.linkedin.com/in/pierre-moulon/
Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
In this episode of Computer Vision Decoded, we dive into implicit neural representations (INRs). We are joined by Itzik Ben-Shabat, a Visiting Research Fellow at the Australian National University (ANU) and the Technion – Israel Institute of Technology, as well as the host of the Talking Papers Podcast.

You will come away with a core understanding of implicit neural representations, key concepts and terminology, how they are used in applications today, and Itzik's research into improving output from limited input data.

Episode timeline:
00:00 Intro
01:23 Overview of what implicit neural representations are
04:08 How INRs compare and contrast with NeRFs
08:17 Why Itzik pursued this line of research
10:56 What is normalization and what are normals
13:13 Past research people should read to learn the basics of INRs
16:10 What is an implicit representation (without the neural network)
24:27 What is DiGS and what problem with INRs does it solve?
35:54 What is OG-INR and what problem with INRs does it solve?
40:43 What software can researchers use to understand INRs?
49:15 What should non-scientists focus on to learn about INRs?

Itzik's website: https://www.itzikbs.com/
Follow Itzik on Twitter: https://twitter.com/sitzikbs
Follow Itzik on LinkedIn: https://www.linkedin.com/in/yizhak-itzik-ben-shabat-67b3b1b7/
Talking Papers Podcast: https://talking.papers.podcast.itzikbs.com/
Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85

Referenced past episode - What is CVPR: https://share.transistor.fm/s/15edb19d

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
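As a quick illustration of the idea behind the 16:10 chapter above ("an implicit representation without the neural network"), here is a minimal sketch of a surface defined not by a mesh or point cloud but by a function that returns signed distance to the surface. The sphere example is ours, purely for illustration; an INR replaces this closed-form function with a small neural network fit to data.

```python
import numpy as np

def sphere_sdf(points, center=(0.0, 0.0, 0.0), radius=1.0):
    """Implicit representation of a sphere: negative inside, zero on the
    surface, positive outside. The surface is the zero level set."""
    return np.linalg.norm(points - np.asarray(center), axis=-1) - radius

# The implicit function can be queried anywhere in space.
query = np.array([[0.0, 0.0, 0.0],   # inside
                  [1.0, 0.0, 0.0],   # on the surface
                  [2.0, 0.0, 0.0]])  # outside
print(sphere_sdf(query))  # [-1.  0.  1.]
```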
In this episode of Computer Vision Decoded, we dive into four different ways to 3D reconstruct a scene from images. Our cohost Jared Heinly, who holds a PhD in computer science with a specialization in 3D reconstruction from images, walks through the four strategies and discusses the pros and cons of each.

Links to content shared in this episode:
Live SLAM to measure a stockpile with SR Measure: https://srmeasure.com/professional
Jared's notes on the iPhone LiDAR and SLAM: https://everypoint.medium.com/everypoint-gets-hands-on-with-apples-new-lidar-sensor-44eeb38db579
How to capture images for 3D reconstruction: https://youtu.be/AQfRdr_gZ8g

Episode timeline:
00:00 Intro
01:30 3D Reconstruction from Video
13:48 3D Reconstruction from Images
28:05 3D Reconstruction from Stereo Pairs
38:43 3D Reconstruction from SLAM

Follow Jared Heinly:
Twitter: https://twitter.com/JaredHeinly
LinkedIn: https://www.linkedin.com/in/jheinly/

Follow Jonathan Stephens:
Twitter: https://twitter.com/jonstephens85
LinkedIn: https://www.linkedin.com/in/jonathanstephens/

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
Join our guest, Keith Ito, founder of Scaniverse, as we discuss the challenges of creating a 3D capture app for iPhones. Keith goes into depth on balancing speed with quality of 3D output and how he designed an intuitive user experience for his users.

In this episode, we discuss:
01:00 - Keith Ito's background at Google
09:44 - What is the Scaniverse app
11:43 - What inspired Keith to build Scaniverse
17:37 - The challenges of using LiDAR in the early versions of Scaniverse
25:54 - How to build a good user experience for 3D capture apps
32:00 - The challenges of running photogrammetry on an iPhone
37:07 - The future of 3D capture
40:57 - Scaniverse's role at Niantic

Learn more about Scaniverse at: https://scaniverse.com/
Follow Keith Ito on Twitter: https://twitter.com/keeeto
Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85
Follow Jonathan Stephens on LinkedIn: https://www.linkedin.com/in/jonathanstephens/

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
In this episode of Computer Vision Decoded, we dive into one of the hottest topics in the industry: Neural Radiance Fields (NeRFs).

We are joined by Matt Tancik, a PhD student in the computer science and electrical engineering department at UC Berkeley. He contributed to the original NeRF research in 2020 and to several follow-up projects since then. Last but not least, he is building Nerfstudio, a collaboration-friendly studio for NeRFs.

In this episode you will learn what NeRFs are and, more importantly, what they are not. Matt goes into the challenges of large-scale NeRF creation, drawing on his experience with Block-NeRF.

Follow Matt's work at https://www.matthewtancik.com/
Get started with Nerfstudio here: https://docs.nerf.studio/en/latest/
Block-NeRF details: https://waymo.com/research/block-nerf/

Episode timeline:
00:00 Intro
00:45 Matt's Background in NeRF Research
04:00 What is a NeRF and how is it different from photogrammetry
11:57 Can geometry be extracted from NeRFs?
15:30 Will NeRFs supersede photogrammetry in the future?
22:47 Block-NeRF and the pros and cons of using 360 cameras
25:30 What is the goal of Block-NeRF
30:44 Why do NeRFs need large GPUs to compute?
35:45 Meshes to simulate NeRF visualizations
40:28 What is Nerfstudio?
47:40 How to get started with Nerfstudio

Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
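Since this episode centers on what a NeRF is, here is a minimal, generic sketch of the volume rendering step that NeRF-style methods use to turn per-sample densities and colors along a camera ray into a pixel color. This illustrates the standard alpha-compositing formula; it is not code from Nerfstudio or Block-NeRF, and the sample values are made up.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite per-sample densities and colors along one ray.

    alpha_i = 1 - exp(-sigma_i * delta_i)
    T_i     = prod_{j < i} (1 - alpha_j)   (transmittance reaching sample i)
    C       = sum_i T_i * alpha_i * c_i    (final pixel color)
    """
    alphas = 1.0 - np.exp(-densities * deltas)
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = transmittance * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Three samples along a ray: mostly empty space, then a dense red surface.
densities = np.array([0.0, 0.1, 5.0])        # sigma at each sample
colors = np.array([[0.0, 0.0, 0.0],
                   [0.5, 0.5, 0.5],
                   [1.0, 0.0, 0.0]])          # RGB at each sample
deltas = np.array([0.5, 0.5, 0.5])            # spacing between samples
print(render_ray(densities, colors, deltas))  # dominated by the red surface
```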
In this episode of Computer Vision Decoded, we dive into image capture best practices for 3D reconstruction. By the end of this livestream, you will have learned the basics of capturing scenes and objects. We also provide a downloadable visual guide to reference on your next 3D reconstruction project.

Download the official guide here to follow along: https://tinyurl.com/4n2wspkn

Episode timeline:
00:00 Intro
04:40 Camera motion overview
07:15 Good camera motions
18:43 Transition camera motions
30:39 Bad camera motions
39:27 How to combine camera motions
49:16 Loop Closure
57:42 Image Overlap
1:14:00 Lighting and camera gear

Watch our episode of Computer Vision in the Wild to learn more about capturing images outside and in busy locations: https://youtu.be/FwVBR6KFjPI

Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
In this episode of Computer Vision Decoded, Jared Heinly and Jonathan Stephens from EveryPoint share their live reaction to the iPhone 14 series announcement. They go in depth on what the camera specs mean for the average person, explain the basics of computational photography, and discuss how Apple is able to get great photos out of a small camera sensor.

Episode timeline:
00:00 Intro
02:43 Apple Watch Review
06:58 AirPods Pro Review
09:40 iPhone 14 Initial Reaction
15:05 iPhone 14 Camera Specs Breakdown
37:13 iPhone 14 Pro Initial Reaction
40:47 iPhone 14 Pro Camera Specs Breakdown

Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
In this episode of Computer Vision Decoded, we sit down with Jared Heinly, Chief Scientist at EveryPoint, to discuss 3D reconstruction in the wild. What does "in the wild" mean? It means 3D reconstructing objects and scenes in uncontrolled environments, where you may face limitations with lighting, access, reflective surfaces, and more.

Episode timeline:
00:00 Intro
01:30 What are Duplicate Scene Structures and How to Avoid Them
14:30 How Jared used 100 million crowdsourced photos to 3D reconstruct 12,903 landmarks
27:10 The benefits of capturing video for 3D reconstruction
31:30 The benefits of using a drone to capture stills for 3D reconstruction
34:20 Considerations for using installed cameras for 3D reconstruction
38:30 How to work with sun issues
44:25 Determining how far from the object you should be when capturing images
50:35 How to capture objects with reflective surfaces
53:40 How to work around scene obstructions
57:20 What cameras you should use

Jared Heinly's academic papers and projects:
Paper: Correcting for Duplicate Scene Structure in Sparse 3D Reconstruction
Project: Reconstructing the World in Six Days
Video: Reconstructing the World in Six Days

Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
In this episode of Computer Vision Decoded we dive into Jared Heinly's recent trip to the CVPR conference. We cover what the conference is about, who should attend, the emerging trends in computer vision, how machine learning is being used in 3D reconstruction, and what NeRFs are for.

Episode timeline:
00:00 - Introduction
00:36 - What is CVPR?
02:49 - Who should attend CVPR?
08:11 - What are emerging trends in Computer Vision?
14:34 - What is the value of NeRFs?
20:55 - How should you attend as a non-scientist or academic?

Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85
CVPR Conference

Episode sponsored by: EveryPoint
In this inaugural episode of Computer Vision Decoded we dive into the recent announcements at WWDC 2022 and find out what they mean for the computer vision community. We talk about what Apple is doing with its new RoomPlan API and how computer vision scientists can leverage it for better experiences. We also cover the enhancements to video and photo capture during an active ARKit session.

Episode timeline:
00:00 - Introduction
00:25 - Meet Jared Heinly
02:10 - RoomPlan API
06:23 - Higher Resolution Video with ARKit
09:17 - The importance of pixel size and density
13:13 - Copy and Paste Objects from Photos
16:47 - CVPR Conference Overview

Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85
Learn about the RoomPlan API
Learn about ARKit 6 Highlights
CVPR Conference

Episode sponsored by: EveryPoint