By S. Scott Grizzle, M.A.
Over the past 18-plus years, many customers and colleagues have asked me how to improve the quality of their online videos. The first thing I tell them is to get out of the “it’s-only-for-the-web” mentality. Online viewership has exploded in recent years, and its acceptance has grown faster than TV’s adoption did at the same point in its development. Think quality first, and remember that a show for the web is now the same as a show for TV. Consider how Netflix, Hulu, and Ustream have embraced HD streaming at the highest quality instead of the low-bitrate, highly compressed “it’s only for the web” approach.
Once you’re in that mindset, there are more ways to improve quality. First, know your equipment and how to adjust it for optimal output. Second, adjust your white and black balances during acquisition. Finally, raise your contrast, saturation, and complexity during encoding. By doing these things, you can make your video look great and differentiate yourself from everyone else.
A major element of great-looking webcasts or streaming media is content acquisition. Without a decent camera and knowledge of its characteristics, the video will look unimpressive. The first step is getting to know your camera. There isn’t one best brand; that’s totally subjective and a matter of personal preference. But simply knowing your camera’s brand will tell you quite a bit about its characteristics. It doesn’t matter whether you are doing a single-camera or a multicamera shoot; knowing the brand and how its characteristics differ from others’ allows you to set up the camera properly. These differences can affect the white balance and color saturation.
You might notice how one camera’s reds look brighter and appear to “pop,” while another brand’s reds are dull but its blues appear to pop. This can be true whether the camera is professional or prosumer. One brand of camera might use complementary metal oxide semiconductor (CMOS) sensors while another uses charge-coupled device (CCD) sensors.
Making the CMOS/CCD Decision
The difference between CMOS and CCD isn’t something most people need to know or will ever care about. However, understanding how each one works, and the differences between them, can tell you which filters you should be using and how light is affected. One good question to ask is, “What is CMOS, and why is it good for a camera and my production?” CMOS sensors use multiple transistors to amplify and move the charge provided by incoming photons of light, enabling the pixels to be read individually. One advantage of CMOS is its low power consumption: a CMOS sensor can consume as little as one-hundredth the power of a comparable CCD. Because CCDs are essentially capacitive devices, they need external control signals and large clock swings to achieve acceptable charge transfer efficiencies.
Another advantage of CMOS is that it adapts well to high frame rates and resolutions. Most CMOS sensors are built at HD resolutions, but because their resolutions are so high and they can read out just the pixels of a region (an area of interest), they allow high frame rates: reducing the readout resolution is what enables the higher frame rate (a quick back-of-the-envelope sketch follows below). This is a real advantage of CMOS, and it means a properly set-up camera can be used in high-frame-rate applications. One last point in CMOS’s favor is that the sensors don’t suffer from smear or blooming artifacts; they produce a clean, high-quality image.
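Here is that back-of-the-envelope sketch. It assumes a fixed sensor readout throughput in pixels per second; the 1080p60 figure is a made-up illustration, not the spec of any particular camera.

```python
# Sketch: why a region-of-interest readout raises the frame-rate ceiling.
# Assume a fixed readout throughput (pixels/second); numbers are illustrative.
THROUGHPUT = 1920 * 1080 * 60  # suppose the sensor can stream 1080p at 60 fps

def max_fps(width: int, height: int) -> float:
    """Frame rate achievable when reading only width x height pixels."""
    return THROUGHPUT / (width * height)

print(f"full frame 1920x1080: {max_fps(1920, 1080):.0f} fps")   # 60 fps
print(f"cropped ROI 1280x720: {max_fps(1280, 720):.0f} fps")    # 135 fps
```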
However, like everything else in life, CMOS sensors have their drawbacks. First, they aren’t as sensitive to light as your typical CCD sensors. So if you’re in a low-light setting, you might run into problems.
Another issue is that these sensors often don’t have infrared (IR) filters installed; this is especially common in industrial applications. Without an IR filter, your colors will be skewed: the spectrum shifts so that your greens turn brown. But this is an easy fix; simply add a filter. Many CMOS sensors use a Bayer filter that passes red, green, or blue light to selected pixels. Lastly, CMOS is considered noisier than CCD, but in most cases this is only noticeable on test equipment. CCDs, by contrast, are designed in a way that can produce very high-quality images with low noise (graininess).
The alternative to CMOS is the older standard, CCD, along with three-CCD solutions. Many people know the term CCD, but what is it? Again, it’s not a make-or-break thing to know, but it helps to understand the complete workflow. The CCD is a solid-state chip that turns light into electric signals. In a full-frame device, all of the image area is active, and there is no electronic shutter; a mechanical shutter must be added to this type of sensor, or the image will smear as the device is clocked or read out. The sensors respond to 70% of the incident light, making them far more efficient than photographic film, which captures only about 2% of the incident light.
The most common types of CCDs are sensitive to infrared light, which allows zero-lux (or near-zero-lux) video recording. Many CCDs, like CMOS sensors, use a Bayer mask over the sensor: each square of four pixels has one filtered red, one blue, and two green. The result is that luminance information is collected at every pixel, but the color resolution is lower than the luminance resolution.
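To make the Bayer layout concrete, here is a minimal sketch (in Python with NumPy, my choice for illustration) that reduces a full RGB frame to an RGGB mosaic. Each pixel keeps only one color sample, which is exactly why chroma resolution ends up lower than luma resolution.

```python
# Minimal sketch of an RGGB Bayer mosaic, assuming an 8-bit RGB frame
# as a NumPy array of shape (height, width, 3). Illustration only.
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Keep one color sample per pixel: R G / G B in each 2x2 block."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red, top-left of each block
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green (two greens per block)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue, bottom-right
    return mosaic  # half the pixels carry green, a quarter each red and blue

frame = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)  # stand-in frame
print(bayer_mosaic(frame))
```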
Better color separation can be reached by three-CCD devices (3CCD). Each of the three CCDs in these devices is arranged to respond to a particular color. Another advantage of 3CCD over a Bayer-mask device is higher efficiency (and, therefore, higher light sensitivity for a given aperture size). This is because, in a 3CCD device, most of the light entering the aperture is captured by a sensor, while a Bayer mask absorbs a high proportion of the light falling on each pixel.
Neither technology has a clear advantage in quality. CMOS sensors can potentially be implemented with fewer components; they use less power and deliver data faster than CCDs. But more devices are using 3CCD and 3CMOS or 3MOS (Panasonic’s term) sensors. Sensors are becoming more and more of a commodity, since they are used in cell phones, gaming devices, and just about anything else you can think of besides your typical camera.
Which Camera Is Right for You? (Camcorder/DSLR/Other)
All cameras have their advantages and disadvantages, whether they are DSLR, camcorder, dockable, or box (industrial) cameras. The typical camcorder is the video professional’s bread and butter; camcorders are used in probably 90% of the industry because you can record instantly to tape or to a hard drive. Many newer cameras, including DSLRs and camcorders, also come with the option to save to SD cards or a hard drive.
In the past few years, a lot of video professionals have started using digital DSLRs. These are great little cameras that offer the option to change out lenses. Most have a simple on-board microphone and an option to add external microphones or audio. Their sensors are high-resolution because they are designed for still pictures. These cameras work like traditional film cameras: they take a series of still images and stitch them together as video. Most DSLRs are best suited to shorter content because of timecode and frame rate drift; since they aren’t really designed for video but for still images captured over a period of time, you will see the frame rate fluctuate. This is a very common issue and the reason you rarely see DSLRs used for longer-form content.
The most common video camera is the good old-fashioned camcorder (camera recorder), available in everything from consumer to professional models. Start with the sensors: you can find camcorders with a single CCD or CMOS sensor, or with 3CCD or 3CMOS (3MOS) blocks. Sensor sizes range from 1/16in in small consumer cameras up to 1 1/2in in truly professional, broadcast-grade HD cameras, with everything in between. All of them give you an on-camera microphone as well as auxiliary audio inputs, and besides battery power, they can run on direct power too.
Camcorders are all over the place for recording and delivery now. Some come with built-in storage or hard drives, while others record to tape or SD cards in addition to offering an HDMI or HD-SDI connector. Some of the newer camcorders have built-in encoders that work over Wi-Fi or an IP Ethernet connection on the back. (This is becoming more commonplace in broadcast because IP is easier to configure and route over the long term than ASI, can multiplex more content, and can handle higher bitrates than SDI.) Some of the higher-end camcorders can also be controlled by a camera control unit (CCU), so besides controlling the camera through its toggles and menus, you can change the settings remotely with the CCU and shade the camera.
Another trend in the camera market is the move away from camcorders to dockable cameras. Dockables offer the advantage of changeable recording formats. Maybe you have an event that requires a videotape recorder (VTR) back or camera control units; obviously, a dockable offers greater flexibility in these situations. They aren’t the cheapest route up front, but one camera can do the job of three or more camera formats. So in the long run, if you can afford this solution, it can pay for itself relatively quickly.
If the dockable price is too high or you want to go for a smaller-sized camera, the box (industrial) camera is a great option. You can set these up to be controlled remotely or in a studio configuration. Their prices, in most cases, are affordable, and you can more than likely use the lenses you already have. The major drawback is that there is no camera back or VTR that is already on board or that can be attached. But then again, you’re not format-dependent. You can use a VTR of any format, and you can also hook up a CCU to give you more control over the black levels and color saturation.
If you have never used a CCU before, know that it allows you to control almost every aspect of the camera. So when should you use a CCU, and when should you go straight to tape? Most people associate CCUs with broadcast or multicamera shoots, but the truth is that it’s up to your standards. If you’re satisfied with the default white balance, black balance, and filters, then go straight to tape. If you’re not and want more, then look into CCUs. That said, not all cameras can use CCUs, though the trend over the last few years has been to allow professional camcorders to be attached to them.
Some box cameras are better than others, and the price will reflect that. But you can’t beat their size for travel, and their cost is usually lower than a dockable’s. However, you can’t attach a VTR to the back of them. You can set them up in studio configurations with viewfinders and lens-control systems. And since you might want to use a CCU, the video can then go to your VTR, where it will be recorded with your personal settings applied. Yes, you can set filters and the pedestal. Some box cameras even offer remote pan-and-zoom control. This is great, except when you need to run multiple multicamera shoots and may not have the staffing to cover them all.
Camera resolution is vitally important. Gone are the days of old 8mm film and SD home video and TV; HD video and streaming are here to stay. It’s very easy to get a camera that can output 1080p or 720p resolution, and such cameras are getting cheaper and easier to find. Any new camera from the past few years should offer a minimum of 720p output, and some of the newer ones will give you 1080p. It’s even possible to find UHD 4K cameras that can stream and record. All of these will offer an HDMI output, and some of the more professional models will offer an HD-SDI output option.
One last thing on cameras: this is 2016, not 1980, so the gear you buy should be progressive rather than interlaced. You will still find some consumer-level gear in an interlaced variant, but that makes no sense when you can get progressive for the same price. You aren’t transmitting over television transmitters, and the televisions or monitors you will be sending to or viewing on are designed to handle progressive video. Plus, this article is about the internet and streaming to the internet, and video on the internet is progressive, not interlaced. So unless you are grabbing content off the air and need to deinterlace it, just buy a progressive HD-resolution camera.
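If you do inherit interlaced material, deinterlacing is a one-step job in most tools. As a sketch, here is one way to call FFmpeg’s yadif deinterlacer from Python; it assumes ffmpeg is installed on your system, and the file names are placeholders.

```python
# Sketch: deinterlace a captured clip with FFmpeg's yadif filter before
# encoding for the web. Assumes ffmpeg is on the PATH; file names are
# placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "interlaced_capture.mp4",  # interlaced source, e.g. off-air
        "-vf", "yadif",                  # "yet another deinterlacing filter"
        "-c:a", "copy",                  # leave the audio untouched
        "progressive_out.mp4",
    ],
    check=True,
)
```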
Getting Your Settings Straight
When it comes to balancing your whites and blacks, some of you may already be thinking, “I just use the on-board white balance. I flip the toggle and wait for the screen to tell me it is complete.” However, there is real value in doing a more detailed white balance and black balance. The defaults on most cameras are fine, but if your camera or equipment allows it, tweaking the white and black balances lets your colors pop and makes your picture appear warmer or cooler, depending on how you set them up. Most people do a quick white balance, which makes skin tone look OK; the camera’s colors will be close to the right range, but there is no guarantee that they will be exactly where they should be.

When adjusting the color range, most people forget to adjust the black balance, but this is just as important as the white balance. Without setting your black levels, it’s difficult to truly set your white balance, because the black level is the base. In most cases, the white level is set to 100 IRE, while the black level is set to 7.5 IRE. The reason black is set to 7.5 IRE is that blanking is set to zero; this keeps the blanking line from being visible in the video. With LCD and digital displays there is no blanking, so black can be set to zero. The tools you use to set your blacks and whites are called waveform monitors and vectorscopes. It is key to know about these tools, about IRE, about illegal blacks and whites, and about how to make your videos appear cold or warm.
Waveform monitors and vectorscopes allow you to see where whites, blacks, and the full color spectrum fall. I bet you would be surprised where your default settings sit and where your colors land in day-to-day use. You may even have software versions of waveform monitors and vectorscopes in your nonlinear editing system and use them without knowing it.
Basically, waveform monitors and vectorscopes are oscilloscopes that are designed to be used in the video environment. Waveform monitors are used to see where black levels and white levels are, while vectorscopes work with chrominance information. The vectorscope shows where the colors should be. Typically, this is where you would use your color bar chart. On the scope are red, magenta, blue, cyan, green, and yellow. So you can see that by using these two devices, with proper adjustment, your colors can be dead-on and give you the extra pop that most companies’ videos just don’t have.
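If you are curious what a waveform monitor is actually measuring, a few lines of code can approximate one. This sketch (Python with NumPy and Pillow, assumed tools rather than anything from a particular NLE) computes Rec. 709 luma for a frame and reports each column’s darkest and brightest values, which is essentially what the scope draws.

```python
# Sketch of a software waveform monitor: a plot-free version that reports
# the luma spread per image column. Assumes an 8-bit RGB frame;
# "frame.png" is a placeholder file name.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("frame.png").convert("RGB"), dtype=np.float32)

# Rec. 709 luma: Y' = 0.2126 R' + 0.7152 G' + 0.0722 B'
luma = rgb @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)

col_min = luma.min(axis=0)  # black floor per column
col_max = luma.max(axis=0)  # white peak per column
print(f"deepest black: {col_min.min():.0f}/255, "
      f"hottest white: {col_max.max():.0f}/255")
```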
IRE, named for the Institute of Radio Engineers, is a unit of measurement developed to measure the amplitude of video signals. Under the general standard, white is 100 IRE and black is 7.5 IRE. However, with more recent digital devices and displays, the blanking frequency has been removed, and black can now be set to zero. But doing this can cause issues when your videos are played on non-digital devices. For example, when switching, you might hear a pop in the audio with each dissolve or cut; this can be amplified during the encoding process, since it is noise on an audio track. Also, your colors’ brightness may differ between the two settings. Setting black to zero is known as super black or enhanced black.
Illegal whites and blacks are levels that fall outside the range a broadcast signal is allowed to carry, and they are a bad thing. Whenever you hear the term illegal, it’s never good. What the term means is that your color settings are out of the norm. With illegal whites and blacks, or even just one or the other, all of your colors become illegal. Your colors will be either blooming or crushed, so your skin tone and other colors just won’t look good. This is why you should take control of your camera’s settings. When you use the automatic setting on your camera, the settings are approximations, not true settings. Your black setting may be at 3 IRE or even 15 IRE, while your whites may be at 70 to 80 IRE or even above 100. Both are bad.
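In digital terms, those IRE targets map onto code values. In 8-bit Rec. 601/709 “studio range” video, reference black sits at code 16 and reference white at 235; anything outside that span is out of range. A quick check like the sketch below (assuming a decoded 8-bit luma plane in NumPy) can flag illegal levels before they reach your switcher or encoder.

```python
# Sketch: flag broadcast-illegal luma levels in an 8-bit frame.
# In 8-bit Rec. 601/709 studio range, black = 16 and white = 235;
# values outside that range are "illegal".
import numpy as np

BLACK, WHITE = 16, 235

def report_illegal_levels(luma: np.ndarray) -> None:
    crushed = np.count_nonzero(luma < BLACK)   # blacks below reference black
    blooming = np.count_nonzero(luma > WHITE)  # whites above reference white
    total = luma.size
    print(f"crushed blacks: {100 * crushed / total:.2f}% of pixels")
    print(f"blooming whites: {100 * blooming / total:.2f}% of pixels")

# Stand-in frame; in practice this comes from your decoder or capture card.
report_illegal_levels(np.random.randint(0, 256, (720, 1280), dtype=np.uint8))
```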
Some great tools you can use to prepare your settings are a Society of Motion Picture and Television Engineers (SMPTE) color chart and warm cards. The SMPTE color chart is considered the broadcast standard for setting colors and contains multiple shades; when you check it on a vectorscope, you will be considerably more accurate. Warm cards are a nice tool that very few people use. They are white balance cards, but they are not pure white, and they come set for different levels of lighting and filaments. If you use them, you can shoot under fluorescent lighting and your whites will look white, not green. With these cards, you can make your video warmer or colder as you see fit.
Warm and cold video looks are easy to define: warm tends toward the red side, while cold tends toward the blue side. Again, choosing which way to go is really personal preference. However, you need to know your audience before you make the choice. Is your audience on CRTs or on LCD and plasma displays? Since CRTs are tube-based and tend to run warm, you might want to make your videos on the cooler side. If your audience is more web-based or HD-based, their displays run cooler, so you will probably want to work on the warmer side. There are some exceptions, and this is where you need to know your equipment. Are your cameras on the warm or cold side? What looks better to you? How is your contrast?
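If you want to push a shot warmer or cooler in software rather than with warm cards, a simple channel-gain tweak illustrates the idea. This is a rough sketch with arbitrary factors, not a calibrated white balance; the file names are placeholders.

```python
# Sketch: nudge a frame warmer or cooler by scaling the red and blue
# channels. The 1.05 factor is an arbitrary illustration value.
import numpy as np
from PIL import Image

def temperature_shift(rgb: np.ndarray, warmth: float) -> np.ndarray:
    """warmth > 1.0 pushes toward red (warm); < 1.0 toward blue (cool)."""
    out = rgb.astype(np.float32)
    out[..., 0] *= warmth   # red channel up for warm
    out[..., 2] /= warmth   # blue channel down for warm
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.asarray(Image.open("frame.png").convert("RGB"))
Image.fromarray(temperature_shift(frame, 1.05)).save("frame_warm.png")
```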
Encoding for Maximum Quality
Now that acquisition and color correction have been covered, the final element for great-looking video on the web is complexity and contrast. Since the original days of streaming media, we have been looking for ways to make videos look better. In the early days, there were only a couple of codecs, and live and real-time encoding was difficult. We could add filters, but more importantly, they gave us control of contrast. As mentioned earlier, televisions and computer monitors used different technology, so their contrast would differ. Using those same controls, you can tweak web videos by adding more contrast and saturation than you would for normal videos. This gives colors more saturation, which makes them seem fuller and brighter. If your contrast is too low, your video will look washed out, dull, and a little gray, which is the case with most web videos. So in many cases, adding extra contrast will give your videos some pop.
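As a rough sketch of that encoding-side tweak, here is what a modest contrast and saturation lift might look like using Pillow’s ImageEnhance module; the 1.1 and 1.2 factors are arbitrary starting points for experimentation, not values from any standard.

```python
# Sketch: add contrast and saturation ahead of encoding. Uses Pillow's
# ImageEnhance; file names and the 1.1/1.2 factors are placeholders.
from PIL import Image, ImageEnhance

frame = Image.open("frame.png").convert("RGB")
frame = ImageEnhance.Contrast(frame).enhance(1.1)  # >1.0 raises contrast
frame = ImageEnhance.Color(frame).enhance(1.2)     # >1.0 raises saturation
frame.save("frame_punchier.png")
```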
Another option to help your video look better is encoding it in high complexity, which can be used in both live and transcoding workflows. Using high complexity basically gives the encoder more information to work with. It’s like shooting the same content at the same angle with both HD and SD cameras: it’s the same content, but the user has more to work with (not to mention a completely different frame shape!).
Now, that is an extreme example. From my own lab tests, there isn’t any major noticeable difference in file size, but there is a definite difference in image quality. I know it can be hard to tell at first glance, especially if your largest video size is 720x480, but when you start doing 1280x720 and above, it’s much more noticeable. So now you’re saying, “If this is such a great thing, why isn’t everyone doing it?” The answer is CPU cycles and time. You must choose between speed and quality, and you may have to put more CPU cores, or in some cases a GPU, behind it, such as Nvidia CUDA hardware; some of the newer Intel chips also come with a built-in encoder if you use their encoding SDKs.
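A modern analog of the complexity setting is the encoder preset. With x264, for instance, a slower preset spends more CPU time searching for better encoding decisions at roughly the same file size. A hedged sketch, assuming FFmpeg with libx264 is installed and using placeholder file names:

```python
# Sketch: trade encode speed for quality by raising encoder complexity.
# With x264, the preset controls how hard the encoder works; "slower"
# burns more CPU cycles than "veryfast" at a similar quality target.
import subprocess

for preset in ("veryfast", "slower"):
    subprocess.run(
        [
            "ffmpeg",
            "-i", "master.mov",
            "-c:v", "libx264",
            "-preset", preset,  # more complexity = more CPU, better encodes
            "-crf", "20",       # constant-quality target
            "-c:a", "aac",
            f"out_{preset}.mp4",
        ],
        check=True,
    )
```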
But I come from the school of doing the highest-quality work possible. Why invest all this time and creative ability only to kill your work in the encoding process? Again, this is a personal judgment. Back in 2000, my crew and I shot about 160-plus hours of footage per week. We then had to edit it and get it up on the web with Synchronized Multimedia (SYMM) technology. We always went for the highest quality and the largest video sizes. Most of our competitors were doing videos at 240x180 or smaller; our smallest was 360x240, and our average was 480x360. By today’s standards that was tiny, but back then the internet was still being adopted. Some customers even requested larger videos, such as 640x480. More recently, I have been shooting HD and UHD 4K and testing encoding techniques for those formats. We always did high-contrast and high-complexity work, and it always gave us a competitive advantage. We never lost a show or a bid because of quality.
Summary
To improve your quality and give yourself a competitive advantage over the other guys, know your equipment and how it operates. You don’t need to be an engineer who knows every facet, but be familiar with how your equipment works and the tweaks needed to make your video stand out. After you know your equipment, make sure you adjust your white and black levels during acquisition; by doing this, your color saturation will be where it should be, not an approximation. Lastly, take advantage of adding contrast and encoding in high complexity. Encoding with higher contrast gives your colors a deeper saturation that makes your video pop, and using high complexity makes your video appear clearer, as if it were encoded at a higher bitrate than it actually was.
Just remember to know your audience. Who are they, and how will they be viewing your streams? Will they be watching on a mobile phone or on an 80in TV? Setting up the production correctly at the beginning will make for better-looking content when the video is scaled down and transcoded.
Tutorial: Improving Video Color Quality. Published in Streaming Media Magazine, 2008.