What is camera interpolation in a phone, and what is it for? How to choose a smartphone with a good camera. Interpolation, in general, is a way of finding intermediate values from known ones.


Camera interpolation, why and what is it?

  1. The sensor is 8 MP, but the picture itself is saved at 13 MP.
  2. So as not to run extra wires to the sensor, the megapixels are inflated right in the software.
  3. It is when one pixel is split into several, so that the enlarged image does not break up into squares. It adds no real resolution and smears the detail.
  4. Interpolation is finding an unknown value from known values.
    How close the interpolated photo comes to the original depends on how well the software is written.
  5. The camera sensor is 8 MP, and the image is stretched to 13 MP. Turn it off, no question. The photos will be 13 MP, but the quality will be that of 8 MP (with more digital noise).
  6. The real resolution, in lines per mm without blurring, is at the level of 2 MP in any case.
  7. Just bloated pixels.
    For example, many webcams claim 720p, but when you look at the settings, they actually capture 240x320.
  8. Interpolation, in the general sense, is the use of a simpler function in a calculation to get a result as close as possible to the exact one, which would be achievable only with the most accurate and laborious computation.
    In this case, to put it simply, programmers flatter themselves that pictures taken with a phone differ only slightly from those taken by more complex devices - cameras.

Sensors are devices that detect only brightness (gradations of light intensity, from completely white to completely black). To let the camera distinguish colors, an array of color filters is applied to the silicon by photolithography. In sensors that use microlenses, the filters are placed between the lens and the photodetector. In scanners that use trilinear CCDs (three adjacent CCD rows responding to red, green and blue, respectively), and in high-end digital cameras that likewise use three sensors, each sensor receives light filtered to its own color. (Note that some multi-sensor cameras use combinations of colors other than the standard three in their filters.) But single-sensor devices, such as most consumer digital cameras, handle color with color filter arrays (CFA).

For each pixel to register its own primary color, a filter of the corresponding color is placed above it. Before reaching a pixel, photons first pass through a filter that transmits only light of its own wavelength; light of other wavelengths is simply absorbed. Scientists have determined that any color in the spectrum can be obtained by mixing just a few primary colors. In the RGB model there are three of them.

Different color filter arrays are developed for different applications, but in digital camera sensors the most popular by far is the Bayer pattern. This technology was invented at Kodak in the 1970s during research into spatial separation of color. In this scheme the filters alternate in a checkerboard layout, and there are twice as many green filters as red or blue ones; the red and blue filters sit between the green ones.

This quantitative ratio is explained by the structure of the human eye, which is more sensitive to green light. The checkerboard layout ensures identical color rendering no matter how you hold the camera (vertically or horizontally). Information from such a sensor is read out line by line: the first line comes out as BGBGBG, the next as GRGRGR, and so on. This technology is called sequential RGB.

In CCD cameras the three signals are combined not on the sensor but in the imaging engine, after the signal has been converted from analog to digital form. In CMOS sensors this combination can happen directly on the chip. Either way, the missing primary colors at each pixel are mathematically interpolated from the colors recorded by the neighboring filters. Note that in any image most points are a mixture of primary colors, and only a few truly represent pure red, green or blue.

For example, to determine how neighboring pixels influence the color of the central one, linear interpolation processes a 3x3 block of pixels. Take the simplest case: three pixels in a row with blue, red and blue filters (BRB), and suppose you are computing the full color of the central red pixel. The missing blue component of that central pixel is computed mathematically from its two blue neighbors. In practice even simple linear interpolation algorithms are considerably more complex and take into account the values of all surrounding pixels. If the interpolation is poor, you get jagged edges at color transitions (or color artifacts appear).
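The averaging step can be sketched in a few lines. This is a toy illustration of bilinear demosaicing, not any camera maker's actual pipeline, and the mosaic values are invented:

```python
# Bilinear demosaicing, minimal sketch: a missing color channel at a
# pixel is estimated as the average of the nearest neighbors that did
# record that channel. Here: the green value at a non-green site.

def green_at_site(raw, y, x):
    """Estimate green at (y, x), a red- or blue-filtered site, as the
    average of the four green neighbors (up, down, left, right)."""
    neighbors = [raw[y - 1][x], raw[y + 1][x], raw[y][x - 1], raw[y][x + 1]]
    return sum(neighbors) / len(neighbors)

# Tiny RGGB Bayer mosaic: rows alternate R G R ... / G B G ...
mosaic = [
    [200, 120, 200],   # R  G  R
    [110,  60, 130],   # G  B  G
    [200, 115, 200],   # R  G  R
]

# Green estimate at the central blue pixel:
print(green_at_site(mosaic, 1, 1))  # (120 + 115 + 110 + 130) / 4 = 118.75
```

A real demosaicer repeats this for every pixel and every missing channel, with edge-aware weighting instead of a plain average.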

Note that the word "resolution" is used loosely in digital imaging. Purists (or pedants, whichever you prefer) familiar with photography and optics know that resolution is a measure of the ability of the eye or an instrument to distinguish individual lines on a resolution target, such as an ISO test chart. But in the computer industry it is customary to call the number of pixels "resolution," and since that is the custom, we will follow the convention too. Indeed, even sensor developers call the pixel count the resolution.


Let's count?

The image file size depends on the number of pixels (the resolution): the more pixels, the larger the file. For example, a frame from a VGA-class sensor (640x480, or 307,200 active pixels) takes about 900 kilobytes uncompressed (307,200 pixels x 3 bytes (R-G-B) = 921,600 bytes, which is roughly 900 kilobytes). A frame from a 16 MP sensor will take about 48 megabytes.
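The arithmetic above can be checked directly. A trivial sketch; the 4608x3456 frame used for the 16 MP case is just one common layout, assumed here for illustration:

```python
# Uncompressed image size: width * height pixels, 3 bytes (R, G, B) each.

def uncompressed_bytes(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel

vga = uncompressed_bytes(640, 480)
print(vga)          # 921600 bytes
print(vga / 1024)   # 900.0 kilobytes

# A 16 MP frame (e.g. 4608x3456) lands near the ~48 MB figure:
print(uncompressed_bytes(4608, 3456))  # 47775744 bytes, about 48 MB
```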

It would seem simple, then: count the pixels in the sensor and you know the size of the resulting image. However, camera manufacturers quote a handful of different numbers, each time claiming that this one is the camera's true resolution.

The total pixel count includes every pixel physically present on the sensor, but only those involved in capturing the image are considered active. Around five percent of all pixels take no part in the picture: they are either defective or used by the camera for other purposes, for example masked off to measure the dark-current level or to set the aspect ratio.

Aspect ratio is the ratio between the width and height of the sensor. In some sensors, for example at 640x480, this ratio is 1.33:1, which matches the aspect ratio of most computer monitors. Images produced by such sensors therefore fit the monitor screen exactly, without cropping. In many cameras the aspect ratio matches that of traditional 35mm film, 1.5:1, which lets you print pictures of standard size and shape.


Resolution Interpolation

Besides optical resolution (the genuine ability of the pixels to respond to photons), there is also resolution boosted in software and hardware by interpolation algorithms. As with color interpolation, resolution interpolation mathematically analyzes the data of neighboring pixels and creates intermediate values. Such "embedding" of new data can be done fairly smoothly, with the interpolated values falling somewhere between the real optical data. But the operation can also introduce interference, artifacts and distortions that only degrade the image. So many pessimists hold that resolution interpolation is not a way to improve image quality at all, merely a way to enlarge files. When choosing a device, look closely at the quoted resolution, and do not be too impressed by a high interpolated figure (it is labeled "interpolated" or "enhanced").

Another software-level image process is sub-sampling, effectively the reverse of interpolation. It is carried out at the image-processing stage, after the data has been converted from analog to digital form, and it discards the data of some pixels. In CMOS sensors the operation can be performed on the chip itself by temporarily disabling the readout of certain rows of pixels, or by reading data only from selected pixels.

Downsampling has two functions. First, to compact the data - to store more images in memory of a certain size. The smaller the number of pixels, the smaller the file size, and the more pictures you can fit on a memory card or in the internal memory of the device, and the less often you have to download photos to a computer or change memory cards.

The second function is to produce images of a specific size for specific purposes. A camera with a 2 MP sensor is quite capable of producing a standard 8x10-inch print. But try to e-mail such a photo and it will noticeably inflate the message. Downsampling lets you process the image so that it looks decent on your friends' monitors (if fine detail is not the goal) while still transferring quickly even over slow connections.
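The decimation idea behind sub-sampling fits in a few lines. A toy illustration: real pipelines normally low-pass filter before discarding pixels to avoid aliasing:

```python
# Sub-sampling sketch: keep every `step`-th row and column, drop the rest.

def subsample(image, step):
    return [row[::step] for row in image[::step]]

# A 4-row, 6-column grayscale "image" with value 10*row + column:
image = [[10 * y + x for x in range(6)] for y in range(4)]

small = subsample(image, 2)   # keeps rows 0 and 2, columns 0, 2, 4
print(small)                  # [[0, 2, 4], [20, 22, 24]]
```

Halving each dimension this way quarters the pixel count, and with it the uncompressed file size.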

Now that we are familiar with how sensors work and how an image is obtained, let's dig a little deeper and touch on the trickier situations that arise in digital photography.

The built-in camera is far from the last consideration when choosing a smartphone. For many people this parameter matters, so when looking for a new smartphone they check how many megapixels the camera claims. Knowledgeable buyers, however, understand that megapixels are not the whole story. So let's look at what to pay attention to when choosing a smartphone with a good camera.

How a smartphone shoots depends on the camera module installed in it (front and main camera modules look much the same). The module fits easily into the body of a smartphone and is usually attached with a ribbon cable, which makes it easy to replace in the event of a failure.

Sony all but dominates this market: its camera modules are used in the overwhelming majority of smartphones. OmniVision and Samsung also produce them.

The smartphone manufacturer itself also matters. A great deal depends on the brand, and a self-respecting company will equip its device with a genuinely good camera. But let's break down, point by point, what determines a smartphone's shooting quality.

CPU

Surprised? It is the processor that processes the image once it receives data from the sensor. No matter how good the sensor is, a weak processor will not cope with transforming the information it receives. This applies not only to recording high-resolution, high-frame-rate video, but also to producing high-resolution stills.

Naturally, the higher the frame rate, the greater the load on the processor.

Among people who understand phones, or think they do, there is an opinion that smartphones with American Qualcomm processors shoot better than those based on Taiwanese MediaTek chips. I will neither refute nor confirm this. But the fact that, as of 2016, there were no smartphones with excellent cameras on low-end Chinese Spreadtrum processors is just that, a fact.

Megapixels

A picture consists of pixels (dots) formed by the sensor during shooting. Naturally, the more pixels, the better the image should be and the higher its clarity. For cameras this parameter is quoted in megapixels.

Megapixels (Mp, Mpx, Mpix) is a measure of the resolution of photos and videos (number of pixels). One megapixel equals one million pixels.

Take, for example, the Fly IQ4516 Tornado Slim smartphone. It shoots photos at a maximum resolution of 3264x2448 pixels (3264 color dots across and 2448 down). Multiplying 3264 by 2448 gives 7,990,272 pixels. The number is large, so it is expressed with the prefix mega: 7,990,272 pixels is roughly 8 million pixels, that is, 8 megapixels.
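The same multiplication, spelled out; there is nothing here beyond the arithmetic in the paragraph above:

```python
# Megapixels = width * height, divided by one million.

width, height = 3264, 2448
pixels = width * height
print(pixels)                     # 7990272
print(round(pixels / 1_000_000))  # 8 (megapixels)
```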

In theory, more pixels mean a sharper picture. But do not forget about noise, worse performance in low light, and so on.

Interpolation

Unfortunately, many Chinese smartphone manufacturers do not hesitate to inflate the resolution in software. This is interpolation: the camera can actually shoot at a maximum of 8 megapixels, which is then stretched programmatically to 13. The quality, of course, does not get any better. How do you avoid being deceived? Search the Internet for which camera module your smartphone uses; the module's specifications state the resolution it actually shoots at. If you cannot find information about the module, that alone is reason to be wary. Sometimes the specifications honestly state that the camera is interpolated, for example from 13 to 16 megapixels.

Software

Do not underestimate the software that processes the digital image and presents it as we see it on screen. It determines color reproduction, removes noise, provides image stabilization (for when the smartphone shakes in your hand while shooting), and so on, not to mention the various shooting modes.

Camera matrix

The type of sensor (CCD or CMOS) and its size matter. It is the sensor that captures the image and passes it to the processor, and the camera's resolution is determined by it.

Aperture (f-number)

When choosing a smartphone with a good camera, pay attention to this parameter. Roughly speaking, it indicates how much light reaches the sensor through the module's optics: the more light, the better; the less, the more noise. Aperture is written as the letter F followed by a slash (/), and the smaller the number after the slash, the wider the aperture. It appears, for example, as F/2.2 or F/1.9, and is usually listed in a smartphone's technical specifications.

A camera with an f / 1.9 aperture will shoot better in low light than a camera with an f / 2.2 aperture, since more light gets into the sensor. But stabilization is also important in this case, both software and optical.

Optical stabilization

Smartphones are rarely equipped with optical stabilization. As a rule, these are expensive devices with an advanced camera. Such a device can be called a camera phone.

A smartphone is shot handheld, with a moving hand, and optical stabilization is there to keep the image from blurring. Hybrid (software plus optical) stabilization also exists. Optical stabilization matters most at long shutter speeds, when poor lighting forces a special mode in which a single frame can take 1-3 seconds to capture.

Flash

The flash can be LED or xenon; the latter gives noticeably better photos in low light. Dual-LED flashes exist, and, rarely, a phone carries both LED and xenon. That is the best option, implemented in the Samsung M8910 Pixon12 camera phone.

As you can see, how a smartphone shoots depends on many parameters. When choosing, check the specifications for the module name, the aperture, and the presence of optical stabilization. Best of all, look up reviews of the particular phone online, where you can see sample pictures and the reviewer's opinion of the camera.

The mobile phone market is full of models with huge camera resolutions; there are even relatively inexpensive smartphones with 16-20 megapixel sensors. An uninformed customer chases the "cool" camera and picks the phone with the higher number, not realizing he is taking the bait of marketers and salespeople.

What is resolution?

Camera resolution is a parameter that indicates the final size of the image. It determines only how large the resulting picture will be, that is, its width and height in pixels. Importantly, it says nothing about quality: a photo can be low-quality yet large, simply because of its resolution.

Resolution does not affect quality; that had to be said before discussing smartphone camera interpolation. Now we can get to the point.

What is phone camera interpolation?

Camera interpolation is an artificial increase in the resolution of an image - of the image itself, not of the sensor. That is, it is special software that stretches a picture shot at 8 megapixels to 13 megapixels or more (or fewer).

By analogy, camera interpolation is similar to binoculars. These devices enlarge the image, but do not make it better or more detailed. So if interpolation is indicated in the specifications for the phone, then the actual resolution of the camera may be lower than stated. This is neither bad nor good, it just is.

What is it for?

Interpolation was invented to increase the size of the image, nothing more. This is now a ploy by marketers and manufacturers who are trying to sell a product. They indicate in large numbers on the advertising poster the resolution of the phone's camera and position it as an advantage or something good. Not only does the resolution itself not affect the quality of photos, it can also be interpolated.

Just 3-4 years ago, many manufacturers chased megapixel counts and tried every way to cram as many photosites as possible into their sensors. Thus appeared smartphones with 5, 8, 12, 15 and 21-megapixel cameras that might shoot no better than the cheapest point-and-shoots, yet buyers who saw the "18 Mp Camera" sticker immediately wanted one. With the advent of interpolation, selling such smartphones became even easier, since megapixels could now be added artificially. Photo quality did improve over time, of course, but not because of resolution or interpolation - because of natural progress in sensors and software.

The technical side

So what is camera interpolation in a phone technically? The text above has described only the general idea.

With special software, new pixels are "drawn" into the image. For example, to double the height of a picture, a new row is inserted after every row of pixels, and each pixel in the new row is filled with a color computed by some algorithm. The earliest method is to fill the new row with the colors of the nearest pixels; the result looks terrible, but it requires a bare minimum of computation.

More often another method is used: new rows of pixels are added, and each new pixel is filled with the average of its neighbors' colors. This gives better results but takes more computation.
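Both row-insertion schemes described above can be sketched on a single-channel image. A toy illustration: real scalers interpolate in both directions and use fancier kernels:

```python
# Doubling image height two ways: nearest-neighbor duplicates each row;
# linear interpolation fills each new row with the average of the rows
# above and below it.

def upscale_rows_nearest(image):
    out = []
    for row in image:
        out.append(row)
        out.append(list(row))     # cheapest: just repeat the row
    return out

def upscale_rows_linear(image):
    out = [image[0]]
    for above, below in zip(image, image[1:]):
        out.append([(a + b) / 2 for a, b in zip(above, below)])  # average
        out.append(below)
    out.append(list(image[-1]))   # bottom edge: no row below, so repeat
    return out

img = [[0, 0], [100, 100]]
print(upscale_rows_nearest(img))  # [[0, 0], [0, 0], [100, 100], [100, 100]]
print(upscale_rows_linear(img))   # [[0, 0], [50.0, 50.0], [100, 100], [100, 100]]
```

The linear version smooths the transition between the two original rows; the nearest-neighbor version just makes the blocks bigger, which is exactly the "squares" effect described earlier.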

Fortunately, modern mobile processors are fast, and in practice the user does not notice how the program is editing the image, trying to artificially increase its size.

There are many more advanced interpolation methods and algorithms, and they are constantly refined: transitions between colors get smoother, lines become more accurate and crisp. But however these algorithms are built, the basic idea of camera interpolation stays the same: it cannot make an image more detailed, add new detail, or otherwise genuinely improve it. Only in the movies does a small blurry picture become sharp after a couple of filters; in practice this does not happen.

Do you need interpolation?

Many users, not knowing better, ask on forums how to interpolate their camera, believing it will improve image quality. In fact, interpolation not only fails to improve the picture, it can make it worse: new pixels are added, and because the fill colors are not always computed accurately, the photo can acquire smeared areas and grain. As a result, quality drops.

So phone interpolation is a marketing gimmick that is completely unnecessary. It can increase not only the photo resolution, but also the cost of the smartphone itself. Do not fall for the tricks of sellers and manufacturers.








2021 gtavrl.ru.