Full Coverage: Depth Sensor In Mobile Phones

Technology is becoming smarter by the day, and with so many unexpected inventions already released, it will not be surprising when more arrive. The depth sensor is one invention that is making waves in the world of mobile phone technology.

Many smartphone manufacturers have built this technology into their devices.

During the launch of the iPhone 8, Apple highlighted camera features that included AR capabilities, laser sensors, and a dual camera.

A more recent smartphone with a 3D depth sensor is the Samsung Galaxy S10 5G.

The laser sensor is simply one component of the depth sensor; together with the dual camera, it can produce sharply focused images with the help of artificial intelligence.

Until recently, an ordinary camera could only flatten the 3D world into a 2D image, but advances and modifications have taken camera capabilities to another level. With improvements in computer vision and deep learning, researchers have been able to build systems that understand our environment.

Computer vision (CV) can carry out many tasks, such as handwriting recognition, enabling autonomous vehicles, object classification, and more.

However, when it comes to depth sensing, CV performance is limited because it typically relies on a single camera to capture and interpret the surroundings.

Just as humans need two eyes to sense the depth of their surroundings, smartphones need dual lenses to capture depth of field.

Two elements are usually present in depth sensing: the infrared (IR) projector and the IR camera. The IR projector emits a pattern of IR light that falls on the objects around it, while the IR camera functions much like an RGB camera.

The difference between the IR camera and the RGB camera is that the former captures images in the infrared range.

A conventional camera cannot carry out depth sensing quickly on its own, which is why a dual camera alongside a depth sensor is needed. Together, the depth sensor and 2D imaging capture detailed information about the real world.

Now let's look at some applications of depth sensing, but first, let's see what a depth sensor is.

What are depth sensors?

Depth sensing has been applied in many fields, one of which is shipping, where it has proved very effective. It is essential for a ship moving through the ocean to have sensing technology, because many objects beneath the surface could pose a danger to the vessel.

What depth sensing technology does on a ship is identify objects in the ocean so that the ship can avoid running into them.

Depth sensor in a Ship

How the technology does this is no mystery: it measures the time a sound pulse directed into the water takes to return after reflecting off an object.

This measurement is called the time of flight, and it gauges the distance from the sound source. The speed of sound in water is fairly constant, varying slightly with water density and temperature.
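To make the arithmetic concrete: the pulse travels to the object and back, so the one-way distance is half the speed of sound multiplied by the echo time. Here is a minimal sketch in Python, using the commonly cited approximate speed of sound in seawater (about 1,500 m/s):

```python
# Estimate distance from a sonar echo using time of flight.
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, approximate; varies with density and temperature

def depth_from_echo(round_trip_seconds: float) -> float:
    """One-way distance: the pulse travels out and back, so halve the trip."""
    return SPEED_OF_SOUND_SEAWATER * round_trip_seconds / 2.0

# An echo returning after 0.2 s implies an object roughly 150 m away.
print(depth_from_echo(0.2))  # 150.0
```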

Similarly, depth sensing in phones is about measuring how far an object is from the camera. It has been applied to facial detection, gesture recognition, proximity sensing, and more.

Applications of depth sensing and AI

AR/VR

AR and VR systems sense the 3D environment and reconstruct it in the virtual world. A depth sensor is vital for human-machine interaction in AR/VR devices; a typical example is Google's Project Tango.

Tango uses its depth sensor to accurately measure the environment and tells the graphics algorithm where to place virtual content.

By contrast, the AR mode in Pokémon Go cannot interact meaningfully with the environment, which is why Pokémon are often placed in inaccurate positions.

The reason is the lack of depth information in Pokémon Go's AR mode. Depth data makes it possible for a device to respond accurately to the user's 3D movement.
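To place virtual content at the right spot, an AR system typically back-projects a depth pixel into a 3D point using the camera's intrinsics (focal length and principal point). A minimal sketch under the standard pinhole camera model; the intrinsic values below are made up for illustration and would come from the device's calibration in practice:

```python
import numpy as np

# Hypothetical pinhole intrinsics for illustration only.
fx, fy = 500.0, 500.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point (image center)

def backproject(u: float, v: float, depth_m: float) -> np.ndarray:
    """Convert pixel (u, v) plus its measured depth into a 3D camera-space point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# A pixel near the image center at 2 m depth maps to a point ~2 m ahead.
print(backproject(330, 250, 2.0))  # [0.04 0.04 2.0]
```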

Robotics

Depth sensors have also been applied in navigation, localization, mapping, and collision avoidance.

Mobile vehicles can move from place to place with the assistance of depth sensing; the technology allows a vehicle to detect where it is in the environment.

This is not only the case for vehicles; robots also use depth sensors. The sensor helps a robot know where a particular object is and how to reach it.

Facial recognition

Depth sensing technology is also useful for security. Some facial recognition systems use a 2D camera to capture a photo and send it to the system's algorithm, which identifies the person.

This is not entirely safe, as someone can easily fake an identity and the system will still be fooled.

However, a 3D camera with depth sensing can intelligently detect a fake identity and differentiate a 2D photo from a real 3D face.

In addition, the 3D sensor captures the geometry of the facial features, making it possible for the system to recognize a face accurately.

A typical example of a device with a 3D camera and depth sensing is Apple's iPhone X, whose TrueDepth camera can both capture and recognize the user's face.
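One simple intuition for why depth defeats photo spoofing: a printed picture is flat, so depth values across a "face" barely vary, while a real face has centimeters of relief between the nose and the cheeks. A toy liveness cue along those lines; the 2 cm threshold is an arbitrary illustrative value, not a production parameter, and real systems use far more sophisticated checks:

```python
import numpy as np

def looks_three_dimensional(face_depth_m: np.ndarray, min_relief_m: float = 0.02) -> bool:
    """Crude liveness cue: a real face shows noticeable depth relief,
    while a photo held up to the camera is essentially flat."""
    relief = face_depth_m.max() - face_depth_m.min()
    return relief > min_relief_m

flat_photo = np.full((64, 64), 0.40)               # every pixel ~0.40 m away
real_face = 0.40 + 0.03 * np.random.rand(64, 64)   # up to 3 cm of relief
print(looks_three_dimensional(flat_photo))  # False
print(looks_three_dimensional(real_face))   # True
```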

Gesture and proximity detection

This applies to gaming as well as security; many smartphone manufacturers have incorporated depth sensing for these purposes.

What the depth sensor does here is capture depth information about the scene.

For instance, it can track hand movements for gesture detection and sense how close your face is for proximity detection. This does not require a heavy algorithm; a depth sensor and simple optics can do the trick.

Depth sensing in computer vision

While computer vision (CV) has achieved a lot in the world of technology, there is still room for improvement.

For better performance, depth sensing has been applied to computer vision; it helps with semantic segmentation, which involves dividing a whole picture into meaningful regions within the field of view.

It is worth noting that the depth sensor is very important for computer vision. Humans can analyze and segment images with their two eyes because they perceive the 3D world without any training.

Computer vision typically uses an RGB camera, so segmentation is based on statistical modeling.

Even with deep learning, segmentation is limited, as the network can only learn typical cues such as color changes and edges.

Learning-based segmentation lacks the information needed to fully access the 3D world, and its computational efficiency leaves much to be desired.

Deep learning segmentation also consumes a large amount of power, which is a major drawback in practice.

When the computer has good information about the 3D world, CV can segment better at lower power consumption. This can be achieved with a basic region-growing or clustering algorithm.
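To illustrate the idea, here is a minimal sketch of depth-based region growing: starting from a seed pixel, it floods outward over neighboring pixels whose depth stays close to that of their accepted neighbors, so surfaces at similar distances fall into the same segment. The 5 cm tolerance is an arbitrary illustrative choice:

```python
import numpy as np
from collections import deque

def grow_region(depth: np.ndarray, seed: tuple, tol_m: float = 0.05) -> np.ndarray:
    """Flood-fill a segment of pixels whose depth stays within tol_m
    of an already-accepted neighbor."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(depth[nr, nc] - depth[r, c]) < tol_m:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

# Toy scene: a flat wall at 2 m with a box at 1 m in the middle.
scene = np.full((100, 100), 2.0)
scene[40:60, 40:60] = 1.0
segment = grow_region(scene, seed=(50, 50))
print(segment.sum())  # 400 pixels: just the box, not the wall
```

Note that no training is involved here; the depth values alone separate the box from the wall, which is exactly why depth can cut segmentation's power cost.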

Presently, the main issue faced by 3D-sensing-based CV algorithms is the lack of data sets.

Data augmentation combined with other techniques can help in training a neural network on 3D information.

This can be done without computing a huge amount of data, although a large data set is still needed to benchmark a fully optimized network.

Methods of Depth Sensing

Time of flight (ToF)

Time of flight can be implemented in two ways. In the first approach, a laser source sends out a pulse, and a sensor detects the pulse's reflection from the target object; the delay between emission and detection is recorded as the time of flight.

With this information, the system can calculate how far away the target object is; for accuracy, the pulse must be short. However, because a high-resolution time-to-digital converter is required, this approach consumes more power.

Instead of a high-resolution time-to-digital converter, the time can also be measured by emitting a modulated light source. Once the modulated light is sent, a sensor detects the phase shift of the reflected light.

This method, which uses mixing techniques, is actually easier than sending pulses, because the mixing circuitry is simpler to implement.

Also, an LED can be used as the modulated light source instead of a laser. The range of a ToF system depends on the emitting power of the light source.
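To make the two variants concrete, here is a small sketch of both calculations: direct ToF halves the round-trip time at the speed of light, while the phase-shift variant converts the measured phase into distance as d = c·Δφ / (4π·f), where f is the modulation frequency. The 20 MHz modulation frequency is just an illustrative value:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def direct_tof_distance(round_trip_s: float) -> float:
    """Direct ToF: the pulse travels out and back, so halve the trip."""
    return C * round_trip_s / 2.0

def phase_tof_distance(phase_shift_rad: float, mod_freq_hz: float = 20e6) -> float:
    """Phase-shift ToF: d = c * phase / (4 * pi * f). Unambiguous only up to
    c / (2 * f), the wrap-around range (~7.5 m at 20 MHz)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

print(direct_tof_distance(20e-9))       # a 20 ns round trip -> ~3 m
print(phase_tof_distance(math.pi / 2))  # a 90-degree shift  -> ~1.87 m
```

The comments also show why the pulse approach needs that power-hungry converter: resolving 3 m requires timing a 20-nanosecond event, whereas measuring a phase shift is a much slower, simpler operation.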

Camera array

This depth sensing method uses multiple cameras placed at different locations to capture multiple images of the same target. The depth is then calculated from geometry, a process known as "stereo vision" in computer vision (CV).

Apart from triple camera arrays, one popular camera array is the dual camera, in which the two lenses are separated by a few millimeters, similar to the human eyes.

One major problem with camera arrays is finding matching points across the images, which makes the CV algorithm complicated.

A workaround is that deep learning can find matching points accurately. However, deep learning comes with a high computation cost, so phones that rely on it will be more expensive.
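As a sketch of the geometry: once a point is matched in both views, its horizontal offset between the images (the disparity) gives the depth as depth = focal_length × baseline / disparity. A minimal example using OpenCV's classic block matcher for the matching step, assuming a rectified left/right pair saved as left.png and right.png; the focal length and baseline here are illustrative, not from any particular phone:

```python
import cv2
import numpy as np

# Illustrative calibration values; real ones come from stereo calibration.
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.012  # 12 mm between the two lenses

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each left-image pixel, the best-matching
# right-image pixel along the same row; their offset is the disparity.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# depth = f * B / disparity; nearer objects shift more between the views.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```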

Structured light

Structured light is another depth sensing technique.

In this case, a laser projects a known pattern, while a receiver detects the distortion of the pattern reflected off the scene.

This distortion of the reflected pattern is used to calculate a depth map using geometry. Although the structured light method takes a long time to compute the depth map, it is very accurate.

Moreover, structured light is sensitive to the brightness of the environment, so it is mostly used in low-light conditions or indoors.

In terms of depth sensing range, this method has the shortest range of the three.
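The geometry is the same triangulation as stereo vision, except one camera is replaced by the projector: a pattern dot landing on a nearer surface appears shifted in the camera image, and depth = focal_length × baseline / shift. A toy sketch with illustrative numbers, not values from any real device:

```python
# Toy structured-light triangulation: the projector and camera form a
# stereo pair, so a dot's shift in the image encodes its depth.
FOCAL_PX = 600.0    # camera focal length in pixels (illustrative)
BASELINE_M = 0.075  # 75 mm between projector and camera (illustrative)

def depth_from_shift(shift_px: float) -> float:
    """Same triangle as stereo vision: a larger shift means a nearer surface."""
    return FOCAL_PX * BASELINE_M / shift_px

# A dot shifted 90 px from its reference position lies ~0.5 m away;
# one shifted only 30 px lies ~1.5 m away.
print(depth_from_shift(90.0))  # 0.5
print(depth_from_shift(30.0))  # 1.5
```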
