You might have recently seen that Apple has put a LiDAR sensor in its new iPad Pro models. But what actually is it, what are its functions, and how does it work? Everything is explained here.
Lidar is basically a method for measuring distances (ranging) by illuminating the target with laser light and measuring the reflection with a sensor. Differences in laser return times and wavelengths can then be used to make digital 3-D representations of the target. It has terrestrial, airborne, and mobile applications.
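To make the ranging principle concrete, here is a minimal Python sketch of the basic time-of-flight computation (the function name and the one-microsecond timing are illustrative, not taken from any particular lidar product):

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_echo(round_trip_time_s):
    """Distance to a target from the round-trip time of a laser pulse.
    Dividing by two converts the out-and-back path to a one-way distance."""
    return C * round_trip_time_s / 2.0

# Illustrative timing: a pulse that returns after 1 microsecond
print(distance_from_echo(1e-6))  # ~149.9 m
```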
The term lidar was originally a portmanteau of light and radar, but it is now also used as an acronym for "light detection and ranging" or "laser imaging, detection, and ranging". Lidar is also sometimes referred to as 3-D laser scanning, a combination of 3-D scanning and laser scanning.
Lidar uses
Lidar is commonly used to:
1. Make high-resolution maps, with applications in surveying, geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, atmospheric physics, laser guidance, airborne laser swath mapping (ALSM), and laser altimetry
2. Provide control and navigation for some autonomous cars
Components
Lidar systems consist of several major components. These are listed below:
Laser
600–1000 nm lasers are most common for non-scientific applications. To make the laser eye-safe for people on the ground, its maximum power is limited, or an automatic shut-off system that turns the laser off at specific altitudes is used.
One common alternative, the 1550 nm laser, is eye-safe at relatively high power levels, since this wavelength is not strongly absorbed by the eye; however, the detector technology is less advanced, so these wavelengths are generally used at longer ranges with lower accuracies. They are also used for military applications, because 1550 nm is not visible in night vision goggles, unlike the shorter 1000 nm infrared laser.
Airborne topographic mapping lidars generally use 1064 nm diode pumped YAG lasers, while bathymetric (underwater depth research) systems generally use 532 nm frequency doubled diode pumped YAG lasers because 532 nm penetrates water with much less attenuation than does 1064 nm. Laser settings include the laser repetition rate (which controls the data collection speed). Pulse length is generally an attribute of the laser cavity length, the number of passes required through the gain material (YAG, YLF, etc.), and Q-switch (pulsing) speed. Better target resolution is achieved with shorter pulses, provided the lidar receiver detectors and electronics have sufficient bandwidth.
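To put the pulse-length trade-off in numbers, here is a small sketch of the usual range-resolution estimate, delta_r = c * tau / 2, where tau is the pulse duration (the 5 ns value below is illustrative):

```python
C = 299_792_458.0  # speed of light in m/s

def range_resolution(pulse_length_s):
    """Approximate range resolution of a pulsed lidar: delta_r = c * tau / 2.
    The factor of two accounts for the round trip to the target and back."""
    return C * pulse_length_s / 2.0

# Illustrative: a 5 ns pulse can resolve targets ~0.75 m apart
print(range_resolution(5e-9))  # ~0.75 m
```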
Flash lidar
In flash lidar, the entire field of view is illuminated with a wide diverging laser beam in a single pulse. This is in contrast to conventional scanning lidar, which uses a collimated laser beam that illuminates a single point at a time, and the beam is raster scanned to illuminate the field of view point-by-point. This illumination method requires a different detection scheme as well. In both scanning and flash lidar, a time-of-flight camera is used to collect information about both the 3D location and intensity of the light incident on it in every frame. However, in scanning lidar, this camera contains only a point sensor, while in flash lidar, the camera contains either a 1D or a 2D sensor array, each pixel of which collects 3D location and intensity information. In both cases, the depth information is collected using the time of flight of the laser pulse (i.e., the time it takes each laser pulse to hit the target and return to the sensor), which requires the pulsing of the laser and acquisition by the camera to be synchronized. The result is a camera that takes pictures of distance, instead of colors.
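As a hedged sketch of that per-pixel scheme, the following Python snippet converts an array of round-trip times from a flash-lidar frame into a depth image (the array size and timing values are made up for illustration):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def times_to_depth(round_trip_times_s):
    """Convert per-pixel round-trip times (seconds) from a flash-lidar
    sensor array into per-pixel distances (meters): d = c * t / 2."""
    return C * round_trip_times_s / 2.0

# Illustrative 4x4 "frame" of round-trip times between ~33 ns and ~66 ns
times = np.random.uniform(33e-9, 66e-9, size=(4, 4))
depth = times_to_depth(times)  # distances of roughly 5-10 m per pixel
print(depth)
```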
As with all forms of lidar, the onboard source of illumination makes flash lidar an active sensor. The signal that is returned is processed by embedded algorithms to produce a nearly instantaneous 3D rendering of objects and terrain features within the field of view of the sensor. The laser pulse repetition frequency is sufficient for generating 3D videos with high resolution and accuracy. The high frame rate of the sensor makes it a useful tool for a variety of applications that benefit from real-time visualization, such as highly precise remote landing operations. By immediately returning a 3D elevation mesh of target landscapes, a flash sensor can be used to identify optimal landing zones in autonomous spacecraft landing scenarios.
Seeing at a distance requires a powerful burst of light. The power is limited to levels that do not damage human retinas. Wavelengths must not affect human eyes. However, low-cost silicon imagers do not read light in the eye-safe spectrum. Instead, gallium-arsenide imagers are required, which can boost costs to $200,000. Gallium-arsenide is the same compound used to produce high-cost, high-efficiency solar panels usually used in space applications.
Phased arrays
A phased array can illuminate any direction by using a microscopic array of individual antennas. Controlling the timing (that is, the phase) of each antenna steers a cohesive signal in a specific direction.
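As a rough illustration of phase steering, here is the standard uniform-linear-array formula for the per-element phase at a desired steering angle (the element count, spacing, and 905 nm wavelength below are illustrative choices, not a specific product's parameters):

```python
import math

def steering_phases(n_elements, spacing_m, wavelength_m, angle_rad):
    """Per-element phase shifts for a uniform linear array:
    phi_n = 2 * pi * n * d * sin(theta) / lambda."""
    return [2 * math.pi * n * spacing_m * math.sin(angle_rad) / wavelength_m
            for n in range(n_elements)]

# Illustrative: 8 elements at half-wavelength spacing for a 905 nm
# emitter, steering the beam 10 degrees off boresight
wavelength = 905e-9
phases = steering_phases(8, wavelength / 2, wavelength, math.radians(10))
print([round(p, 3) for p in phases])
```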
Several companies are working on developing commercial solid-state lidar units, including Quanergy, which is designing a 905 nm solid-state device, although it appears to be having some issues in development.
The control system can change the shape of the lens to enable zoom in/zoom out functions. Specific sub-zones can be targeted at sub-second intervals.
Electromechanical lidar lasts for between 1,000 and 2,000 hours. By contrast, solid-state lidar can run for 100,000 hours.
Microelectromechanical machines
Microelectromechanical mirrors (MEMS) are not entirely solid-state. However, their tiny form factor provides many of the same cost benefits. A single laser is directed to a single mirror that can be reoriented to view any part of the target field. The mirror spins at a rapid rate. However, MEMS systems generally operate in a single plane (left to right). To add a second dimension generally requires a second mirror that moves up and down. Alternatively, another laser can hit the same mirror from another angle. MEMS systems can be disrupted by shock/vibration and may require repeated calibration. The goal is to create a small microchip to enhance innovation and further technological advances.
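One detail worth sketching (a standard consequence of the law of reflection, assuming a simple flat scanning mirror rather than any specific MEMS product): rotating the mirror by a given angle deflects the beam by twice that angle, which is why a small mechanical sweep covers a wide optical field of view.

```python
import math

def beam_angle(mirror_angle_rad):
    """For a flat scanning mirror, rotating the mirror by theta deflects
    the reflected beam by 2 * theta (law of reflection)."""
    return 2.0 * mirror_angle_rad

# A +/-15 degree mechanical sweep gives a +/-30 degree optical field of view
print(math.degrees(beam_angle(math.radians(15))))  # 30.0
```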
Scanner and optics
Image development speed is affected by the speed at which the scene is scanned. Options for scanning the azimuth and elevation include dual oscillating plane mirrors, a combination with a polygon mirror, and a dual-axis scanner. The choice of optics affects the angular resolution and the range that can be detected. A hole mirror or a beam splitter are options for collecting the return signal.
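To see why angular resolution matters, here is a back-of-the-envelope sketch (with illustrative numbers) of the spacing between adjacent samples at a given range, using the small-angle approximation:

```python
import math

def point_spacing(range_m, angular_step_deg):
    """Approximate spacing between adjacent lidar samples at a given range,
    using the small-angle approximation: spacing = range * step_radians."""
    return range_m * math.radians(angular_step_deg)

# Illustrative: a 0.1 degree angular step at 100 m range
print(round(point_spacing(100.0, 0.1), 3))  # ~0.175 m between samples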
Photodetector and receiver electronics
Two main photodetector technologies are used in lidar: solid-state photodetectors, such as silicon avalanche photodiodes, and photomultipliers. The sensitivity of the receiver is another parameter that has to be balanced in a lidar design.
Position and navigation systems
Lidar sensors mounted on mobile platforms such as airplanes or satellites require instrumentation to determine the absolute position and orientation of the sensor. Such devices generally include a Global Positioning System receiver and an inertial measurement unit (IMU).
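As a simplified sketch of how those measurements combine (a flat local frame and illustrative values; a real system does full GPS/IMU processing), the world coordinate of a return is the sensor position plus the measured range vector rotated into the world frame:

```python
import numpy as np

def georeference(sensor_pos, rotation, range_m, beam_dir_sensor):
    """Project a lidar return into a local world frame:
    world_point = sensor_position + R @ (range * beam_direction),
    where R is the sensor-to-world rotation derived from the IMU."""
    return sensor_pos + rotation @ (range_m * beam_dir_sensor)

# Illustrative: sensor at 500 m altitude, identity orientation,
# looking straight down, with a 500 m measured range
pos = np.array([0.0, 0.0, 500.0])
R = np.eye(3)
down = np.array([0.0, 0.0, -1.0])
print(georeference(pos, R, 500.0, down))  # hits the ground at [0, 0, 0]
```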
Sensor
Lidar uses active sensors that supply their own illumination source. The energy source hits objects, and the reflected energy is detected and measured by sensors. Distance to the object is determined by recording the time between the transmitted and backscattered pulses and using the speed of light to calculate the distance traveled. Flash lidar allows for 3-D imaging because of the camera's ability to emit a larger flash and sense the spatial relationships and dimensions of the area of interest with the returned energy. This allows for more accurate imaging because the captured frames do not need to be stitched together, and the system is not sensitive to platform motion, resulting in less distortion.
3-D imaging can be achieved using both scanning and non-scanning systems. "3-D gated viewing laser radar" is a non-scanning laser ranging system that applies a pulsed laser and a fast gated camera. Research has begun for virtual beam steering using Digital Light Processing (DLP) technology.
Imaging lidar can also be performed using arrays of high speed detectors and modulation sensitive detector arrays typically built on single chips using complementary metal–oxide–semiconductor (CMOS) and hybrid CMOS/Charge-coupled device (CCD) fabrication techniques. In these devices each pixel performs some local processing such as demodulation or gating at high speed, downconverting the signals to video rate so that the array can be read like a camera. Using this technique many thousands of pixels / channels may be acquired simultaneously. High resolution 3-D lidar cameras use homodyne detection with an electronic CCD or CMOS shutter.
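As a hedged sketch of the modulation-based ranging these demodulating arrays enable (a standard amplitude-modulated continuous-wave formula, not a detail given in this article): distance is recovered from the phase shift of the modulation, d = c * phi / (4 * pi * f_mod), unambiguous up to c / (2 * f_mod). The 20 MHz figure below is illustrative.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_to_distance(phase_rad, mod_freq_hz):
    """Amplitude-modulated continuous-wave (AMCW) ranging:
    d = c * phi / (4 * pi * f_mod), unambiguous up to c / (2 * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# Illustrative: 20 MHz modulation (unambiguous range ~7.5 m),
# with a measured phase shift of pi/2 radians
print(round(phase_to_distance(math.pi / 2, 20e6), 3))  # ~1.874 m
```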
A coherent imaging lidar uses synthetic array heterodyne detection to enable a staring single element receiver to act as though it were an imaging array.
In 2014, Lincoln Laboratory announced a new imaging chip with more than 16,384 pixels, each able to image a single photon, enabling it to capture a wide area in a single image. An earlier generation of the technology, with one fourth that number of pixels, was deployed by the U.S. military after the January 2010 Haiti earthquake; a single pass by a business jet at 3,000 meters (10,000 ft) over Port-au-Prince was able to capture instantaneous snapshots of 600-meter squares of the city at a resolution of 30 centimetres (12 in), displaying the precise height of rubble strewn in city streets. The Lincoln system is ten times faster. The chip uses indium gallium arsenide (InGaAs), which operates in the infrared spectrum at a relatively long wavelength that allows for higher power and longer ranges. In many applications, such as self-driving cars, the new system will lower costs by not requiring a mechanical component to aim the chip. InGaAs uses less hazardous wavelengths than conventional silicon detectors, which operate at visual wavelengths.
Devices that currently use LiDAR
Apple iPad Pro 2020
The new Apple iPad Pro 2020 comes with a LiDAR sensor, and we expect the iPhone 12, due later this year, to come equipped with the same. Here, LiDAR works similarly to the front-facing TrueDepth sensor, but rather than being optimized for the face, it allows users to scan a depth-accurate depiction of their surroundings.