Download: A New Standardized Method for Objectively Measuring Video Quality

A New Standardized Method for Objectively Measuring Video Quality Margaret H Pinson and Stephen Wolf NTIA pioneered perception-based video quality Abstract— The National Telecommunications and Information measurement in 1989 [1]. Subsequently, other organizations Administration (NTIA) General Model for estimating video have performed major research efforts [2]-[10]. NTIA’s quality and its associated calibration techniques were research has focused on developing technology independent independently evaluated by the Video Quality Experts Group (VQEG) in their Phase II Full Reference Television (...
Author: Naruaki Sugahara Shared: 7/30/19
Downloads: 311 Views: 2679

Content

A New Standardized Method for Objectively Measuring Video Quality

Margaret H Pinson and Stephen Wolf NTIA pioneered perception-based video quality Abstract— The National Telecommunications and Information measurement in 1989 [1]. Subsequently, other organizations Administration (NTIA) General Model for estimating video have performed major research efforts [2]-[10]. NTIA’s quality and its associated calibration techniques were research has focused on developing technology independent independently evaluated by the Video Quality Experts Group (VQEG) in their Phase II Full Reference Television (FR-TV) test. parameters that model how people perceive video quality. The NTIA General Model was the only video quality estimator These parameters have been combined using linear models to that was in the top performing group for both the 525-line and produce estimates of video quality that closely approximate 625-line video tests. As a result, the American National subjective test results. With the assistance of other Standards Institute (ANSI) adopted the NTIA General Model organizations (e.g., VQEG), NTIA has collected data from 18 and its associated calibration techniques as a North American independent video quality experiments. The resulting 2944 Standard in 2003. The International Telecommunication Union (ITU) has also included the NTIA General Model as a normative subjectively rated video sequences were all sampled according 1 method in two Draft Recommendations. This paper presents a to ITU-R Recommendation BT.601 [11]. This wide variety description of the NTIA General Model and its associated of input scenes and transmission systems has enabled NTIA to calibration techniques. The independent test results from the develop robust, technology independent parameters and video VQEG FR-TV Phase II tests are summarized, as well as results quality models. from eleven other subjective data sets that were used to develop This paper provides a description of the National the method. Telecommunications and Information Administration (NTIA) Index Terms— Video Quality, Image Quality, objective testing, General Model for estimating video quality and its associated subjective testing. calibration techniques (e.g., estimation and correction of spatial alignment, temporal alignment, and gain/offset errors). The General Model was metric H in the Video Quality I. INTRODUCTION Experts Group (VQEG) Phase II Full Reference Television TH advent of digital video compression, storage, and (FR-TV) tests [12]. These algorithms have been standardized E transmission systems exposed fundamental limitations of by the American National Standards Institute (ANSI) in the techniques and methodologies that have traditionally been updated version of T1.801.03 [13], and have been included as used to measure video performance. Traditional performance a normative method in two International Telecommunication parameters relied on the "constancy" of a video system’s Union (ITU) recommendations [14][15]. performance for different input scenes. Thus, one could inject The General Model was designed to be a general purpose a test pattern or test signal (e.g., a static multi-burst), measure video quality model (VQM) for video systems that span a very some resulting system attribute (e.g., frequency response), and wide range of quality and bit rates. Extensive subjective and be relatively confident that the system would respond objective tests were conducted to verify the performance of similarly for other video material (e.g., video with motion). the General Model before it was submitted to the VQEG However, modern digital video systems adapt and change Phase II test. While the independent VQEG Phase II FR-TV their behavior depending upon the input scene and the tests only evaluated the performance of the General Model on operational characteristics of the digital transmission system MPEG-2 and H.263 video systems, the General Model was (e.g., bit-rate, error rate). Therefore, attempts to use input developed using a wide variety of video systems and thus scenes that differ from what is actually used in-service (i.e., should work well for many other types of coding and the actual user’s video) can result in erroneous and misleading transmission systems (e.g., bit rates from 10 kbits/s to 45 results. Mbits/s, MPEG-1/2/4, digital transmission systems with errors, analog transmission systems, and tape-based systems, utilizing both interlace and progressive video). Manuscript received November 19, 2003. This work was supported by the U.S. Department of Commerce. M. H. Pinson is with the Institute for Telecommunication Sciences, Boulder, CO 80305 USA (phone: 303-497-3579; fax: 303-497-3680; e-mail: 1 A common 8-bit video sampling standard that samples the luminance (Y) email is hidden). channel at 13.5 MHz, and the blue and red color difference channels (CB and S. Wolf is with the Institute for Telecommunication Sciences, Boulder, CO CR) at 6.75 MHz. If 8 bits are used to uniformly sample the Y signal, Rec. 80305 USA (e-mail: email is hidden). 601 specifies that reference black be sampled at 16 and reference white at 235., The General Model utilizes reduced-reference technology has been calculated, the spatial shift is removed from the [16] and provides estimates of the overall impressions of processed video stream (e.g., a processed image that was video quality (i.e., mean opinion scores, as produced by shifted down is shifted back up). panels of viewers). Reduced-reference measurement systems For interlaced video, this may include reframing of the utilize low-bandwidth features that are extracted from the processed video stream as implied by comparison of the source and destination video streams. Thus, reduced- vertical field one and field two shifts. Reframing occurs when reference systems can be used to perform real-time in-service either the earlier field moves into the later field and the later quality measurements (provided an ancillary data channel is field moves into the earlier field of the next frame (one-field available to transmit the extracted features), a necessary delay), or when the later field moves into the earlier field and attribute for tracking dynamic changes in video quality that the earlier field of the next frame moves into the later field of result from time varying changes in scene complexity and/or the current frame (one-field advance). Reframing impacts transmission systems. The General Model utilizes reduced- spatial alignment for both 525-line video (e.g., NTSC) and reference parameters that are extracted from optimally-sized 625-line video (e.g., PAL) identically and causes the field-two spatial-temporal (S-T) regions of the video sequence. The vertical shift (in field lines) to be one greater than the field- General Model requires an ancillary data channel bandwidth one vertical shift (in field lines). of 9.3% of the uncompressed video sequence, and the Spatial alignment must be determined before the processed associated calibration techniques require an additional 4.7%. valid region (abbreviated as PVR and defined as that portion The General Model and its associated calibration of the processed video image which contains valid picture techniques comprise a complete automated objective video information), gain and level offset, and temporal alignment. quality measurement system (see Fig. 1). The calibration of Specifically, each of those quantities must be computed by the original and processed video streams includes spatial comparing original and processed video content that has been alignment, valid region estimation, gain & level offset spatially registered. If the processed video stream were calculation, and temporal alignment. VQM calculation spatially shifted with respect to the original video stream and involves extracting perception-based features, computing this spatial shift were not corrected, then these other video quality parameters, and combining parameters to calibration estimates would be corrupted. Unfortunately, construct the General Model. This paper will first provide a spatial alignment cannot be correctly determined unless the summary description of each process (the reader is referred to PVR, gain and level offset, and temporal alignment are also [17] for a more detailed description). Finally, test results from known. The interdependence of these quantities produces a eleven subjective data sets and the independent VQEG FR-TV “chicken or egg” measurement problem. Calculation of the Phase II tests will be presented. spatial alignment for one processed field requires that one know the PVR, gain and level offset, and the closest matching original field. However, one cannot determine these quantities until the spatial shift is found. A full exhaustive search over all variables would require a tremendous number of computations if there were wide uncertainties in the above quantities. Specifying the above quantities too narrowly could result in spatial alignment errors. The solution presented here performs an iterative search to find the closest matching original frame for each processed frame.2 An initial baseline (i.e., starting) estimate for vertical shift, horizontal shift, and temporal alignment is computed for one processed frame using a multi-step search. The first step is a broad search, over a very limited set of spatial shifts, whose purpose is to get close to the correct matching original frame. Gain compensation is not considered in the broad search, and PVR is set to exclude the over-scan portion of the picture which, in most cases, will eliminate invalid video from being Fig. 1. Block Diagram of entire VQM used. The second step is a broad search for the approximate spatial shift, performed using a more limited range of original frames. The broad search for spatial shift covers II. SPATIAL ALIGNMENT approximately two dozen spatial shifts. Fewer downward The spatial alignment process determines the horizontal and vertical spatial shift of the processed video relative to the 2 When operating on interlaced video, all operations will consider video original video. The accuracy of the spatial alignment from each field separately; when operating on progressive video, all algorithm is to the nearest 0.5 pixel for horizontal shifts and to operations will consider the entire video frame simultaneously. For simplicity, the calibration algorithms will be entirely described for progressive the nearest line for vertical shifts. After the spatial alignment video, this being the simpler case., shifts are considered, since these are less likely to be ambiguous. Because the pattern of vertical lines repeats, the encountered in practice. The third step performs localized horizontal shift is ambiguous, two or more horizontal shifts spatial-temporal searches to fine-tune the spatial and temporal being equally acceptable. estimates. Each fine search includes a small set of spatial Therefore, the iterative search algorithm should be applied shifts centered around the current spatial alignment estimate to a sequence of processed frames. The individual estimates and just three frames temporally, centered around the current of horizontal and vertical shifts from multiple processed best matching original frame. The zero shift condition is frames are then median-filtered to produce more robust included as a safety check that helps prevent the algorithm estimates. Using the 50th percentile point allows the most from wandering and converging to a local minimum. This likely horizontal and vertical shift to be chosen. This third step iterates up to five times. If these repeated fine algorithm consistently produces a horizontal spatial alignment searches fail to find a stable result (i.e., a local minimum), the accuracy that is good to the nearest 0.5 pixels.3 Spatial shift above procedure is repeated using a different processed frame. estimates from multiple sequences or scenes may be further This produces a baseline estimate that will be updated combined to produce an even more robust estimate for the periodically, as described below. Hypothetical Reference Circuit (HRC)4 being tested, assuming The spatial alignment algorithm calculates the spatial that the spatial shift is constant for all scenes passing through alignment for each of a series of processed frames at some the HRC. specified frequency (e.g., one frame every half-second). The spatial alignment algorithm described above requires a Using the baseline estimate as a starting point, the algorithm relatively high ancillary data channel bandwidth, due to the performs alternate fine searches (as described above) and pixel-by-pixel comparison of original and processed frames. estimations of the luminance gain and level offset. To This could impact the design of an in-service quality calculate the luminance gain and level offset, the mean and monitoring application. Fortunately, each piece of video standard deviation of the original and processed frames are transmission equipment (i.e., encoder, decoder, or analog compared, using the current spatial and temporal alignment transmission) will normally have one constant spatial estimates. This simple calculation has a robust performance in alignment. If the hardware had a changing or variable spatial the presence of alignment errors. If the baseline estimate is alignment, the transmitted video would appear to move up and correct or very nearly correct, three fine searches will down – an unacceptable degradation that would be quickly normally yield a stable result. If a stable result is not found, addressed by the manufacturer. most likely the spatial shift is correct but the temporal shift estimate is off (i.e., the current estimate of temporal shift is III. PROCESSED VALID REGION (PVR) more than two frames away from the true temporal shift). In Video sampled according to ITU-R Recommendation this case, a broad search for the temporal shift is conducted BT.601 [11], henceforth abbreviated as Rec. 601, may have a that includes the current best estimate of spatial shift. This border of pixels and lines that does not contain a valid picture. broad search will normally correct the temporal shift estimate. The original video from the camera may only fill a portion of When the broad search for the temporal shift completes, its the Rec. 601 frame. A digital video system that utilizes output is used as the starting point, and up to five repeated compression may further reduce the area of the picture in fine searches are performed, alternating with luminance gain order to save transmission bits. If the non-transmitted pixels and level offset calculations. If this second repeated fine and lines occur in the over-scan area of the television picture, search fails to find a stable result, then spatial alignment has the typical end-user should not notice the missing lines and failed for this frame. If a stable result has been found for this pixels. If these non-transmitted pixels and lines occur in the frame, the spatial shifts (i.e., horizontal and vertical) are stored displayed picture area, the viewer may notice a black border and the baseline estimate is updated. around the displayed images, since the video system will For some processed frames, the spatial alignment algorithm normally insert black into this non-transmitted picture area. could fail. Usually, when the spatial alignment is incorrectly Video systems (particularly those that perform low-pass estimated for a processed frame, the ambiguity is due to filtering) may exhibit a ramping up from the black border to characteristics of the scene. Consider, for example, a digitally the picture area. These transitional effects most often occur at created progressive scene containing a pan to the left. the left and right sides of the image but can also occur at the Because the pan was computer generated, this scene could top or bottom. Occasionally, the processed video may contain have a horizontal pan of exactly two pixels every frame. several lines of corrupted video at the top or bottom that may From the spatial alignment search algorithm’s point of view, it not be visible to the viewer (e.g., VHS tape recorders can would be impossible to differentiate between the correct corrupt several lines at the bottom of the picture in the over- spatial alignment computed using the matching original frame, and a two pixel horizontal shift computed using the frame that 3 Spatial alignment to the nearest 0.5 pixels is sufficient for the video occurs one frame prior to the matching original frame. For quality measurements described herein. another example, consider an image consisting entirely of 4 The term HRC is used here to denote one instantiation of a video digitally perfect black and white vertical lines. Because the transmission system which may include an encoder, a digital transmission image contains no horizontal lines, the vertical shift is entirely system, and a decoder. HRC is a generic term commonly used by standards bodies to protect the anonymity of video equipment suppliers., scan area). IV. GAIN & LEVEL OFFSET To prevent non-picture areas from influencing the VQM A prerequisite for performing gain and level offset measurements, these areas are excluded from the VQM calibration is that the original and processed images be measurement. Since the behavior of some video systems is spatially registered. The original and processed images must scene dependent, the valid region should ideally be calculated also be temporally registered, which will be addressed later. using actual video streams. In this case, PVR should be Gain and level offset calibration can be performed on either calculated for each scene separately. After PVR has been fields or frames as appropriate. calculated, the invalid pixels are discarded from the original The method presented here makes the assumption that the and processed video sequences. Rec. 601 Y, CB, and CR signals each have an independent gain The automated valid region algorithm estimates the valid and level offset. This assumption will in general be sufficient region of the processed video stream so that subsequent for calibrating component video systems (e.g., Y, R-Y, B-Y). computations do not consider corrupted lines at the top and However, in composite or S-video systems, it is possible to bottom of the Rec. 601 frame, black border pixels, or have a phase rotation of the chrominance information since transitional effects where the black border meets the picture the two chrominance components are multiplexed into a area. The core algorithm starts with an assumption that the complex signal vector with amplitude and phase. The outside edges of each processed frame contain invalid video. algorithm presented here will not properly calibrate video The extent of this invalid region is set empirically, based upon systems that introduce a phase rotation of the chrominance observations of actual video systems. For 525-line video information (e.g., the hue adjustment on a television set). In sampled according to Rec. 601, the default invalid region addition, since a linear estimation algorithm is utilized, excludes 6 pixels/lines at the top, left and right, and 4 lines at excessive gains that cause pixel levels to be clipped will cause the bottom. The PVR algorithm begins by setting the PVR to estimation errors unless the algorithm is modified to allow for exclude this default invalid region. The pixels immediately this effect. inside the current valid region estimate are then examined. If The valid regions of the original and processed frames are the average pixel value is black or ramping up slowly from divided into small, square sub-regions, or blocks. The mean black, then the valid region estimate is accordingly decreased over space of the [Y, CB, CR] samples for each corresponding in size. By repeating this examination, the valid region is original and processed sub-region are computed to form iteratively diminished in size. spatially sub-sampled images. To temporally register a The stopping conditions can be fooled by scene content. processed frame (with spatial shift held constant), the standard For example, an image that contains genuine black at the left deviation of each (original minus processed) difference image side (i.e., black that is part of the scene) will cause the core is computed using the sub-sampled Y luminance frames. For algorithm to conclude that the left-most valid column of video a given processed frame, the temporal shift that produces the is farther toward the middle of the image than it ought to be. smallest standard deviation (i.e., most cancellation with the For that reason, the core algorithm is applied to multiple original) is chosen as the best match. A first order linear fit is images from the processed video sequence and the largest used to compute the relative gain and offset between the sub- observed PVR (with some safety margin added) is used for the sampled original and processed frames. This linear fit is final PVR estimate. The coordinates of the PVR are applied independently to each of the three channels: Y, CB, transformed via the spatial alignment results, so that the PVR and CR. specifies the portion of the original video that remains valid. The algorithm described above should be applied to This automated valid region algorithm works well to multiple matching original and processed frame pairs estimate the valid region of most scenes. Due to the nearly distributed at regular intervals throughout the video sequence. infinite possibilities for scene content, the algorithm takesaAmedian filter is then applied to the six time histories of the conservative approach to estimation of the valid region. A level offsets and gains to produce average estimates for the manual examination of valid region would quite likely choose scene. If the level offset and gain is constant for all scenes a larger region. Conservative valid region estimates are more that have passed through a given HRC, then measurements suitable for an automated video quality measurement system, performed on each scene can be filtered (across all the scenes) because discarding a small amount of video will have little to increase robustness and accuracy. The overall HRC level impact on the quality estimate and in any case this video offset and gain results can then be used to compensate all of usually occurs in the over-scan portion of the video. On the the processed video for that HRC. other hand, including corrupted video in the video quality Although gain and level offsets are calculated for the CB calculations may have a large impact on the quality estimate. and CR channels, these correction factors are not applied. The The valid region algorithm can also be applied to the General Model utilizes only the luminance or Y channel gain original video sequence. The resulting original valid region and level offset correction factors. Changes to the CB and CR (OVR) increases the accuracy of the processed valid region color channels are considered impairments for which the calculation, by providing a maximal bound on the PVR. system under test should be penalized., V. TEMPORAL ALIGNMENT distributed. Patterns in the histogram can provide insights into Modern digital video communication systems typically the system under test, such as an indication of changing or require several tenths of a second to process and transmit the variable delay. Delay measurements from still and nearly video from the sending camera to the receiving display. motionless portions of the scene are not used, since the Excessive video delays impede effective two-way original images are nearly identical to each other. communication. Therefore, objective methods for measuring The delay indicated at the final stage of the algorithm may end-to-end video communications delay are important to end- be different from the delay a viewer might choose when users and service providers for specification and comparison aligning the scenes by eye. Viewers tend to focus on motion, of interactive services. Video delay can depend upon dynamic aligning the high motion parts of the scene, where the frame- attributes of the original scene (e.g., spatial detail, motion) and based algorithm chooses the most often observed delay over video system (e.g., bit-rate). For instance, scenes with large all of the frames that were examined. These overall delay amounts of motion can suffer more video delay than scenes histograms are then examined to determine the extent and with small amounts of motion. Thus, video delay statistics of any variable video delay present in the HRC. measurements should ideally be made in-service to be truly representative and accurate. Estimates of video delay are VI. AN OVERVIEW OF FEATURE AND PARAMETER required to temporally align the original and processed video CALCULATION METHODS streams before making quality measurements. A quality feature in the context of this algorithm is defined Some video transmission systems may provide time as a quantity of information associated with, or extracted synchronization information (e.g., original and processed from, a spatial-temporal sub-region of a video stream (either frames may be labeled with some kind of timing information). original or processed). The feature streams are a function of In general, however, time synchronization between the space and time that characterize perceptual changes in the original and processed video streams must be measured. This spatial, temporal, and chrominance properties of video section presents a technique for estimating video delay based streams. By comparing features extracted from the calibrated upon the original and processed video frames. The technique processed video with features extracted from the original is “frame-based” in that it works by correlating lower video, quality parameters can be computed that are indicative resolution images, sub-sampled in space and extracted from of perceptual changes in video quality. the original and processed video streams. This frame-based Viewed conceptually, all of the features used by the technique estimates the delay of each frame or field (for General Model perform the same steps. A perceptual filter is interlaced video systems). These individual estimates are applied to the video stream to enhance some property of combined to estimate the average delay of the video sequence. perceived video quality, such as edge information. After this To reduce the influence of distortions on temporal perceptual filtering, features are extracted from spatial- alignment, original and processed images are spatially sub- temporal (S-T) sub-regions using a mathematical function sampled and then normalized to have unit variance. Each (e.g., standard deviation). Finally, a perceptibility threshold is individual processed image is then temporally registered using applied to the extracted features. the technique presented for the gain & level offset algorithm All perceptual filters operate on frames within a calibrated (i.e., find the original image that minimizes the standard video sequence. Thus, the pixels in original and processed deviation of the difference between the original and processed images outside of the PVR have been discarded, the processed images). This locates the most similar original image for each sequence has been spatially registered, the processed processed image. luminance Y images have been gain/level offset compensated, However, it is not the identity of the original image that is and the processed sequence has been temporally registered. of interest, but rather the relative delay between the original All features operate independently of image size (i.e., S-T and processed images (e.g., in seconds or frames). The delay region size does not change when the image size changes).5 measurements from a series of images are combined into a Each perceptual filter distinguishes some aspect of video histogram, which is then smoothed. If a bin near one end of quality. The luminance image plane contains information the histogram contains a large number count, then the pertinent to edge business and noise. An edge enhanced temporal alignment uncertainty was too small and the entire version of the luminance Y image plane more accurately temporal alignment algorithm should be re-run with a larger identifies blurring, blocking, and other large-scale edge temporal uncertainty. Otherwise, the maximum smoothed effects. The color image planes, CB and CR, are useful for histogram bin indicates the best average temporal alignment identifying hue impairments and digital transmission errors. for the scene. This counting scheme produces an accurate Time differencing consecutive luminance Y image planes estimate for the average delay of a video sequence. highlights jerky or unnatural motion. Unlike the previous calibration algorithms, the temporal After the original and processed video streams have been alignment algorithm examines every processed video frame. Some of these individual temporal alignment measurements This independence of S-T region size and image size has only been tested may be incorrect but those errors will tend to be randomly for standard definition television, including CIF and QCIF sequences. We, perceptually filtered, the video streams are divided into However, most features use either the ratio comparison abutting S-T regions. S-T region sizes are described by (1) function the number of pixels horizontally, (2) the number of frame p = ( f − f ) / f (3) lines vertically, and (3) the time duration of the region. Sincepoothe processed video has been calibrated, for each processed S- or the log comparison function T region there exists a corresponding original S-T region. ⎛ f p ⎞ Features are extracted from each of these S-T regions usingap= log10⎜⎜ ⎟⎟ (4) f simple mathematical function. The two functions that work ⎝ o ⎠ best are mean, which measures the average pixel value, and where fo and fo2 are original feature values; fp and fp2 are the standard deviation, which estimates the spread of pixel values. corresponding processed feature values. After feature extraction, the temporal axis no longer relates to These visual masking functions imply that impairment individual frames. The temporal extent of the S-T regions perception is inversely proportional to the amount of localized determines the sample rate of the feature stream. This sample spatial or temporal activity that is present. In other words, rate cannot exceed the frame rate. spatial impairments become less visible as the spatial activity Finally, some feature values are clipped to prevent them increases (i.e., spatial masking), and temporal impairments from measuring impairments that are imperceptible. This become less visible as the temporal activity increases (i.e., clipping is of the form: temporal masking). fclip = max( f ,threshold) (1) The ratio and log comparison functions produce a mixture of positive and negative values, where positive numbers where f is the feature before clipping, threshold is the indicate gains, and negative numbers indicate losses. Greater clipping threshold, and fclip is the feature after clipping. Since measurement accuracy can be obtained by examining losses clipping is applied to both the original and processed feature and gains separately. The fundamental reason is that humans streams, this clipping serves to reduce sensitivity to generally react more negatively to additive impairments (e.g., imperceptible impairments. blocking which produces extra edges) than subtractive Where quality features quantify some perceptual aspect of impairments (e.g., blurring which produces a loss of edge one video stream, quality parameters compare original and sharpness) and hence losses and gains must be given different processed features to obtain an overall measure of video weights in the quality estimator. Therefore, the ratio and log distortion. Viewed conceptually, all of the parameters used by comparison functions are always followed by either a loss the General Model perform the same steps. First, the function (i.e., replace positive values with zero) or a gain processed feature value for each S-T region is compared to the function (i.e., replace negative values with zero). corresponding original feature value using comparison The parameters from the S-T regions form three- functions that emulate the perception of impairments. Next, dimensional arrays spanning the temporal axis and two spatial perception-based error-pooling functions are applied across dimensions (i.e., horizontal and vertical). For the spatial space and time. Error pooling across space will be referred to collapsing step, impairments from the S-T regions with the as spatial collapsing, and error pooling across time will be same time index are pooled using a spatial collapsing function referred to as temporal collapsing. Sequential application of (e.g., mean, standard deviation, or rank-sorting with percent the spatial and temporal collapsing functions to the streams of threshold selection). Spatial collapsing yields a time history S-T quality parameters produces single-value quality of parameter values. Extensive investigations performed by parameters for the entire video sequence, which is nominally 8 NTIA revealed that the optimal spatial collapsing function to 10 seconds in duration. 6 The final space-time collapsed often involves some form of worst case processing, such as parameter values may also be scaled and clipped to account taking the average of the worst 5% of the distortions observed for nonlinearities and to better match the parameter’s at that point in time. This is because localized impairments sensitivity to impairments with the human perception of those tend to draw the focus of the viewer, making the worst part of impairments. the picture the predominant factor in the subjective quality The perceptual impairment at each S-T region is calculated decision. using comparison functions that have been developed to The parameter time history results from the spatial model visual masking of spatial and temporal impairments. collapsing function are next pooled using a temporal Some features use a comparison function that performs a collapsing function to produce an objective parameter for the simple Euclidean distance between two original and two video sequence. Viewers use a variety of temporal collapsing processed feature streams. functions. For example, the mean over time is indicative of p = ( fo − fp)2 + ( f f )2 (2) the average quality that is observed during the time period. o2 − p2 The 90% level for a gain parameter’s time history is indicative of the worst additive transient impairment that is observed expect high definition television (HDTV) to exhibit this independence as well, (e.g., digital transmission errors may causea1to 2 second but this has not been tested. disturbance in the processed video). 6 Most of the video sequences that were used to develop the General Model The all-positive or all-negative temporally collapsed were from 8 to 10 seconds in duration., parameters may be scaled to account for nonlinear The si_loss parameter is calculated by performing the relationships between the parameter value and perceived following seven steps: quality. It is preferable to remove any nonlinear relationships 1) Apply the SI13 filter to each luminance image. before building the video quality models, since the linear 2) Divide each video sequence into 8 pixelx8line x 0.2 least-squares algorithm will be used to determine the optimal second S-T regions. This is the optimal S-T region size parameter weights. Two nonlinear scaling functions that for the si_loss parameter [18]. might be applied are the square root function, and the square 3) Compute the standard deviation of each S-T region. function. If the square root function is applied to an all- 4) Apply a perceptibility threshold, replacing values less than negative parameter, the parameter is first made all positive 12 with 12. (i.e., absolute value taken). 5) Compare original and processed feature streams (each Finally, a clipping function might be applied to reduce the computed using steps 1 through 4) using the ratio parameter’s sensitivity to small amounts of impairment. This comparison function (see equation 3) followed by the loss clipping function for positive parameters is: function. 6) Spatially collapse by computing the average of the worst ⎧ 0 if p ≤ t p'= (5) (i.e., most impaired) 5% of S-T blocks for each 0.2 second ⎨ ⎩ p − t otherwise slice of time. 7) Temporally collapse by sorting values in time and where t is the threshold. selecting the 10% level. Since the parameter values are all When designing individual parameters, the specific details negative, this is a form of worst-case temporal processing. of each step are established by analyzing subjectively rated video. For example, threshold from equation (1) and p from B. Parameter “hv_loss” equation (5) are set to values that maximize the correlation The hv_loss parameter detects a shift of edges from between the quality parameter and subjective video quality horizontal & vertical orientation to diagonal orientation, such ratings. Thus, the specific details of the General Model as might be the case if horizontal and vertical edges suffer parameters were chosen to best emulate human perception. more blurring than diagonal edges. This parameter uses the horizontally and vertically filtered images (H and V) output VII. GENERAL MODEL PARAMETERS from the SI13 filter. Two new perceptually filtered images are The General Model contains seven independent parameters. created: one contains horizontal and vertical edges (HV) and Four parameters are based on features extracted from spatial the other contains diagonal edges (HVBAR, or complement of gradients of the Y luminance component, two parameters are HV). An edge angle is computed for each pixel by taking the based on features extracted from the vector formed by the two four-quadrant arctangent of the SI13 filtered H and V pixel (C , C ) chrominance components, and one parameter is values. The HV image contains values where the angle isBRbased on the product of features that measure contrast and within 0.225 radians of horizontal or vertical, and zero motion, both of which are extracted from the Y luminance otherwise. The HVBAR image contains values where the angle component. The seven parameters are computed as described indicates a diagonal edge, and zero otherwise. Pixels with an below. SI13 magnitude value less than 20 are not used (i.e., replaced with zero), because the angle calculation is unreliable. A. Parameter “si_loss” The hv_loss parameter is calculated by performing the Parameter si_loss detects a decrease or loss of spatial following nine steps: information (e.g., blurring). This parameter uses a 13 pixel 1) Apply the HV and HVBAR perceptual filters to each spatial information filter (SI13) that has a peak response at luminance plane. approximately 4.5 cycles/degree (when Rec. 601 video is 2) Divide each of the HV and HVBAR video sequences into 8 viewed at a distance of 6 times picture height). The SI13 filter pixelx8line x 0.2 second S-T regions. This is the was specifically developed to measure perceptually significant optimal S-T region size for the hv_loss parameter [18]. edge impairments [17]. An alternate method for extracting 3) Compute the mean of each S-T region. edges is the Sobel filter, but the 3 pixel Sobel filter detects 4) Apply a perceptibility threshold, replacing values less than details so fine that people may not care if they are blurred. 3 with 3. SI13 utilizes 13 pixel by 13 pixel horizontal and vertical filter 5) Compute the ratio (HV / HVBAR). masks. These two filter masks are created by horizontal and 6) Compare original and processed feature streams (each vertical replication of the following vector: computed using steps 1 through 5) using the ratio comparison function (see equation 3) followed by the loss [-.0052625, -.0173446, -.0427401, -.0768961, -.0957739, - function. .0696751, 0, .0696751, .0957739, .0768961, .0427401, 7) Spatially collapse by computing the average of the worst .0173446, .0052625] 5% of blocks for each 0.2 second slice of time. The horizontal and vertical filters are separately applied to 8) Temporally collapse by taking the mean over all time the luminance image. The resulting filtered images (IH and IV) slices. are combined into a single image (ISI13) using Euclidean 9) Square the parameter (i.e., non-linear scaling), and clip at distance (i.e., square root of the sum of the squares). a minimum value of 0.06 (see equation 5)., Due to the non-linear scaling, the values associated with 5) Set all values greater than 0.14 equal to 0.14 to prevent parameter hv_loss are all positive, rather than being all excessive quality improvements of more than about one- negative as is the case for the other loss metric, si_loss. third of a quality unit when multiplied by the parameter weight (see section VIII). One-third of a quality unit is C. Parameter “hv_gain” the maximum improvement observed in the subjective This parameter detects a shift of edges from diagonal to data that was used to develop this parameter. Thus, an horizontal & vertical, such as might be the case if the HRC will only be rewarded for a small amount of edge processed video contains tiling or blocking artifacts. enhancement. The si_gain parameter is a relative 1) Perform steps 1 through 5 from parameter hv_loss. enhancement in quality for systems that perform contrast 2) Compare original and processed feature streams using the enhancement with respect to systems that don’t perform log comparison function (see equation 4) followed by the contrast enhancement. In Section VIII, VQM will be gain function. clipped to prevent the si_gain parameter from producing 3) Spatially collapse by computing the average of the worst processed quality estimates better than the original. 5% of blocks for each 0.2 second slice of time. F. Parameter “ct_ati_gain” 4) Temporally collapse by taking the mean over all time slices. The perceptibility of spatial impairments can be influenced by the amount of motion that is present. Likewise, the D. Parameter “chroma_spread” perceptibility of temporal impairments can be influenced by This parameter detects changes in the spread of the the amount of spatial detail that is present. A feature derived distribution of two-dimensional color samples. from the product of contrast information and temporal 1) Divide the CB and CR color planes into separate 8 pixelx8information can be used to partially account for these linex1frame S-T regions. interactions. The ct_ati_gain metric is computed as the 2) Compute the mean of each S-T region. Multiple the CR product of a contrast feature, measuring the amount of spatial means by 1.5 to increase the perceptual weighting of the detail, and a temporal information feature, measuring the red color component in the next step. amount of motion present in the S-T region. Impairments will 3) Compare original and processed feature streams CB and be more visible in S-T regions that have a low product than in CR using Euclidean distance (see equation 2). S-T regions that have a high product. This is particularly true 4) Spatially collapse by computing the standard deviation of blocks for each 1-frame slice of time. of impairments like noise and error blocks. ct_ati_gain 5) Temporally collapse by sorting the values in time and identifies moving-edge impairments that are nearly always selecting the 10% level, and then clip at a minimum value present, such as edge noise. of 0.6. Since all values are positive, this represents a best- 1) Apply the “absolute value of temporal information” (ATI) case processing temporally. Thus, chroma_spread motion detection filter to each luminance plane. ATI is measures color impairments that are nearly always the absolute value of a pixel-by-pixel difference between present. the current and previous video frame. Steps 1 and 2 essentially sub-sample the C and C image 2) Divide each video sequence into 4 pixelx4line x 0.2BRplanes. Just as si_loss, hv_loss, and hv_gain examine edges second S-T regions. containing enough pixels to be perceptually significant, 3) Compute the standard deviation of each S-T region. 4) Apply a perceptibility threshold, replacing values less than chroma_spread performs coherent integration (i.e., CB and 3 with 3. CR treated as a vector) of color samples over an area large 5) Repeat steps 2 through 4 on the Y luminance video enough to have significant perceptual impact. sequence (without perceptual filtering) to form “contrast” E. Parameter “si_gain” feature streams. This is the only quality improvement parameter in the 6) Multiply the contrast and ATI feature streams. 7) Compare original and processed feature streams (each model. The si_gain parameter measures improvements to computed using steps 1 through 6) using the ratio quality that result from edge sharpening or enhancements. comparison function (see equation 3) followed by the gain The si_gain parameter is calculated by performing the function. following five steps: 8) Spatially collapse by computing the mean of each 0.2 1) Perform steps 1 through 3 from si_loss. second slice of time. 2) Apply a perceptibility threshold, replacing values less than 9) Temporally collapse by sorting values in time and 8 with 8. selecting the 10% level. The parameter values are all 3) Compare original and processed feature streams (each positive, so this temporal collapsing function is a form of computed using steps 1 and 2) using the log comparison best-case processing, detecting impairments that are nearly function paired with the gain function. always present. 4) Spatially and temporally collapse by computing the average of all blocks, and then clip at a minimum value of G. Parameter “chroma_extreme” 0.004. These steps estimate the average overall level of This feature uses the same color features as the edge enhancement that is present. chroma_spread metric, but different spatial-temporal, collapsing functions. Chroma_extreme detects severe If VQM > 1.0, then VQM = (1 + c)*VQM / (c + VQM), localized color impairments, such as those produced by digital where c = 0.5. transmission errors. VQM computed in the above manner will have values 1) Perform steps 1 through 3 from chroma_spread. greater than or equal to zero and a nominal maximum value of 2) Spatially collapse by computing for each slice of time the one. VQM may occasionally exceed one for video scenes that average of the worst 1% of blocks (i.e., rank-sorted values are extremely distorted. from the 99% level to the 100% level), and subtract from that result the 99% level. This identifies very bad IX. PERFORMANCE distortions that impact a small portion of the image. 3) Temporally collapse by computing standard deviation of The fundamental purpose of the General Model and the the results from step 2. associated calibration routines is to track subjective video quality scores. This ability will be demonstrated by VIII. GENERAL MODEL comparing General Model results with subjectively rated video clips. This section describes how to compute the General Model using the calculated parameter values. The General Model is A. Training Data optimized to achieve maximum objective to subjective The General Model was developed using subjective and correlation using a wide range of video quality and bit rates. objective test data from eleven different video quality The General Model has objective parameters for measuring experiments. These eleven subjective experiments were the perceptual effects of a wide range of impairments such as conducted from 1992 to 1999. All of the data sets were blurring, block distortion, jerky/unnatural motion, noise (in collected in accordance with the most recent version of ITU-R both the luminance and chrominance channels), and error Recommendation BT.500 [19] or ITU-T Recommendation blocks (e.g., what might typically be seen when digital P.910 [20] that was available when the experiment was transmission errors are present). This model consists of a performed. All of the data sets used scenes from 8 to 10 linear combination of the video quality parameters described seconds in duration. Nine of the data sets (i.e., data sets one in section VII. The General Model produces output values to nine) used double stimulus testing where viewers saw both that range from zero (no perceived impairment) to the original and processed sequences. Two of the data sets approximately one (maximum perceived impairment). The (i.e., data sets ten and eleven) used single stimulus testing General Model values may be multiplied by 100 to where viewers saw only the processed sequence. Seven of the approximately scale results to the Difference Mean Opinion data sets were primarily television experiments (i.e., data sets Score (DMOS) derived from the 100-point double stimulus one to seven) while four of the data sets were primarily continuous quality scale (DSCQS). The General Model was videoconferencing experiments (i.e., data sets eight to eleven). designed using Rec. 601 video that was subjectively evaluated The subjective scores from each of the subjective data sets at a viewing distance of four to six times picture height. have been linearly mapped onto a common scale with a The General Video Quality Model (VQM) consists of the nominal range of [0,1] using the iterative nested least squares following linear combination of the seven parameters given in algorithm (INLSA) [17] [21] [22] and the seven parameters section VII: from the General Model. The reader is directed to [17] for VQM = - 0.2097 * si_loss more complete descriptions of these subjective experiments. + 0.5969 * hv_loss Taken together, these experiments include 1536 + 0.2483 * hv_gain subjectively rated video sequences. Fig. 2 shows the scatter + 0.0192 * chroma_spread plot of subjective quality versus VQM, where each data set’s - 2.3416 * si_gain video sequences are plotted in a different color (1 = black, 2 = + 0.0431 * ct_ati_gain red, 3 = green, 4 = blue, 5 = yellow, 6 = magenta, 7 = cyan, 8 + 0.0076 * chroma_extreme = gray, 9 = dark red, 10 = copper, 11 = aquamarine). The Y- Note that si_loss is always less than or equal to zero, so axis of Fig. 2 shows the subjective common scale. The si_loss can only increase VQM. Since all the other overall Pearson linear correlation coefficient between parameters are greater than or equal to zero, si_gain is the subjective quality and VQM for the video sequences plotted in only parameter that can decrease VQM. Fig. 2 is 0.948. After the contributions of all the parameters are weighted and added up, VQM is clipped at a lower threshold of 0.0. This prevents si_gain values from producing a quality rating that is better than the original (i.e., a negative VQM). Finally, a crushing function that allows a maximum 50% overshoot is applied to VQM values over 1.0. The purpose of the crushing function is to limit VQM values for excessively distorted video that falls outside the range of the subjective data used to develop the model., objective video quality metrics. To that end, VQEG performed the FR-TV Phase II test, in 2001 to 2003 [12]. The VQEG FR-TV Phase II tests provided an independent evaluation of the ability of video quality models and their associated calibration algorithms to reproduce subjective scores. This test contained two experiments, one restricted to 525-line video and the other restricted to 625-line video. The subjective testing was performed by three independent labs. The subjective data and VQM for these two experiments are plotted in Fig. 4 and Fig. 5. In the 525-line test, the General Model was one of only two models that performed statistically better than the other models tested. The Pearson linear correlation was 0.938, and the outlier ratio 0.46.7 In the 625-line test, the General Model was one of four models that performed statistically better than the other models. The Pearson linear correlation was 0.886, and the outlier ratio 0.31. No model performed statistically better than the General Model in either the 525-line or 625-line test. All other models Fig. 2. Training data: clip subjective quality vs. clip VQM. performed statistically worse than the General Model in either the 525-line or the 625-line test or both. Fig. 3 shows the effect of averaging over scenes to produce a single subjective score (i.e., HRC subjective quality) and objective score (i.e., HRC VQM) for each video system. HRC subjective quality is indicative of how the system responds (on average) to a set of video scenes. The overall Pearson linear correlation coefficient between HRC subjective quality and HRC VQM for the data points in Fig. 3 is 0.980. For making video system (i.e., HRC) comparisons, the estimate of HRC subjective quality provided by HRC VQM is more accurate than the estimate of clip subjective quality provided by clip VQM. This can be seen by comparing the amount of scatter in Fig. 3 with the amount of scatter in Fig. 2. Fig. 4. 525-line VQEG FR-TV Phase II test data: clip subjective quality vs. clip VQM. The data depicted in these two scatter plots are identical to that reported in [17]. The VQM scores from the General Model plotted in the VQEG graphs are not exactly equivalent to VQM values as given in section VIII. This is because the VQEG FR-TV Phase II data analysis applied a logistic transformation to each objective metric to remove non- linearities that might be present. However, the logistic transformation had only a minor impact on the VGM values, because these values exhibited a near-linear relationship to the VQEG FR-TV Phase II subjective test data. If the logistics transformation is not performed, the Pearson linear Fig. 3. Training data: HRC subjective quality vs. HRC VQM. correlations are 0.930 for the 525-line test and 0.865 for the 625-line test. B. Testing Data VQEG provides input to standardization bodies responsible 7 Outliers are data points with an error in excess of twice the standard error for producing International Recommendations regarding of the mean. The “outlier ratio” is the number of outliers divided by the total number of data points., [7] H. Ikeda, T. Yoshida, and T. Kashima, “Mixed variables modeling method to estimate network video quality,” SPIE Video Communications and Image Processing Conference, Lugano, Switzerland, Jul. 8-11 2003. [8] J. Caviedes and F. Oberti, “No-reference quality metric for degraded and enhanced video,” SPIE Video Communications and Image Processing Conference, Lugano, Switzerland, Jul. 8-11 2003. [9] A. Pessoa, A. Falcão, A. Faria, and R. Lotufo, “Video quality assessment using objective parameters based on image segmentation,” IEEE Int. Telecommunications Symposium 98, 1, Brazil, 1998, p. 498-503. [10] A. Watson, J. Hu and J. McGowan, “DVQ: A digital video quality metric based on human vision,” Journal of Electronic Imaging, 10(1), pp 20-29. [11] ITU-R Recommendation BT.601, “Encoding parameters of digital television for studios,” Recommendations of the ITU, Radiocommunication Sector. [12] Video Quality Experts Group (VQEG), “Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment, phase II,” 2003 VQEG. Available: www.vqeg.org [13] ANSI T1.801.03 – 2003, “American National Standard for Fig. 5. 625-line VQEG FR-TV Phase II test data: clip subjective Telecommunications – Digital transport of one-way video signals – quality vs. clip VQM. Parameters for objective performance assessment,” American National Standards Institute. [14] Preliminary Draft New Recommendation “Objective perceptual video quality measurement techniques for digital broadcast television in the

X. CONCLUSION presence of a full reference,” Recommendations of the ITU,

We have presented an overview of a general purpose video Radiocommunication Sector. [15] Draft Revised Recommendation J.144, “Objective perceptual video quality model (VQM) and its associated calibration routines. quality measurement techniques for digital cable television in the

This model has been shown by the VQEG FR-TV Phase II presence of a full reference,” Recommendations of the ITU,

test to produce excellent estimates of video quality for both Telecommunication Standardization Sector. [16] ITU-T Recommendation J.143, “User requirements for objective 525-line and 625-line video systems. In the 525-line test, perceptual video quality measurements in digital cable television,”

VQM was one of only two models that performed statistically Recommendations of the ITU, Telecommunication Standardization

better than the other models submitted for independent Sector. [17] S. Wolf and M. Pinson, “Video quality measurement techniques,” NTIA evaluation. In the 625-line test, VQM was one of four models Report 02-392, June 2002. Available: that performed statistically better than the others. Overall, www.its.bldrdoc.gov/n3/video/documents.htm

VQM was the only model that performed statistically better [18] S. Wolf and M. Pinson, “The relationship between performance and

than the others in both the 525-line and 625-line tests. spatial-temporal region size for reduced-reference, in-service video quality monitoring systems,” in Proc. SCI / ISAS 2001 (Systematics,

Obtaining an average Pearson correlation coefficient over Cybernetics, and Informatics / Information Systems Analysis and

both tests of 0.91, VQM was the only model to break the 0.9 Synthesis), Jul. 2001, pp. 323-328. threshold. As a result, VQM was standardized by ANSI in [19] ITU-R Recommendation BT.500, “Methodology for subjective assessment of the quality of television pictures,” Recommendations of

July 2003 (ANSI T1.801.03-2003), and has been included in the ITU, Radiocommunication Sector. Draft Recommendations from ITU-T Study Group 9 and ITU- [20] ITU-T Recommendation P.910, “Subjective video quality assessment R Working Party 6Q. methods for multimedia applications,” Recommendations of the ITU, Telecommunication Standardization Sector. VQM and its associated automatic calibration algorithms [21] M. Pinson and S. Wolf, “An objective method for combining multiple

have been completely implemented in user friendly software. subjective data sets,” SPIE Video Communications and Image

This software is available to all interested parties via a no-cost Processing Conference, Lugano, Switzerland, Jul. 8-11 2003.

license agreement [23]. [22] S. D. Voran, “An iterated nested least-squares algorithm for fitting multiple data sets,” NTIA Technical Memorandum TM-03-397, Oct. 2002. Available:

REFERENCES www.its.bldrdoc.gov/home/programs/audio/pubs_talks.htm

[23] M. Pinson and S. Wolf, “Video quality metric software, Version 2,” [1] S. Wolf, “Features for automated quality assessment of digitally NTIA Software/Data Product SD-03-396, Volumes 1-5, Oct. 2002. transmitted video,” NTIA Report 264, June 1990. Available at Available: www.its.bldrdoc.gov/n3/video/vqmsoftware.htm www.its.bldrdoc.gov/n3/video/pdf/ntia264.pdf [2] D. Hands, “A basic multimedia quality model,” IEEE Transactions on Multimedia, to be published in 2004. Margaret H. Pinson earned a [3] A. Hekstra et al., “PVQM – A perceptual video quality measure,” Signal B.S. and M.S. in Computer Processing Image Communication 17, 2002, pp. 781-798. Science from the University of [4] S. Winkler and R. Campos, “Video quality evaluation for internet streaming applications,” in Proceedings of SPIE-IS&T Electronic Colorado at Boulder, CO in 1988 Imaging, SPIE Vol. 507, 2003, pp. 104-115. and 1990, respectively. Since [5] C. Lee and O. Kwon, “Objective measurements of video quality using 1988 she has been working as a the wavelet transform,” Optical Engineering v. 42 no 1, Jan. 2003, pp. 265-72. Computer Engineer at the Institute [6] A. Worner, “Realtime quality monitoring of compressed video signals,” for Telecommunication Sciences (ITS), an office of the SMPTE Journal v. 111 no 9, Sept 2002, pp. 373-7. National Telecommunications and Information Administration (NTIA) in Boulder, Colorado. Her goal is to develop, automated metrics for assessing the performance of video systems and actively transfer this technology to end-users, standards bodies, and U.S industry. Her publications are available on-line at www.its.bldrdoc.gov/n3/video/documents.htm. Stephen Wolf received a BS in electrical engineering from Montana State University at Bozeman, Montana in 1979 and an MS in electrical and computer engineering from the University of California at Santa Barbara, California in 1983. From 1979 until 1988, he worked on the design and development of radar signal processing and target recognition techniques, including highly advanced inverse synthetic aperture radar (ISAR) systems for the Naval Weapons Center in China Lake, CA. Since 1988, he has been Project Leader of the Video Quality Research Program at the Institute for Telecommunication Sciences (ITS), an office of the National Telecommunications and Information Administration (NTIA) in Boulder, Colorado. His contributions to the field include numerous papers and three U.S. patents for the development "reduced-reference" video quality measurement systems that emulate human perception. Mr. Wolf is an active participant and contributor to the standardization activities of IEEE, the American National Standards Institute (ANSI), and the International Telecommunication Union (ITU) and has served as Chief Technical Editor for video performance measurement standards and technical reports.]
15

Similar documents

Video Quality Measurement Techniques Stephen Wolf Margaret Pinson report series
NTIA Report 02-392 Video Quality Measurement Techniques Stephen Wolf Margaret Pinson report series NTIA Report 02-392 Video Quality Measurement Techniques Stephen Wolf Margaret Pinson U.S. DEPARTMENT OF COMMERCE Donald L. Evans, Secretary Nancy J. Victory, Assistant Secretary for Communications and
TEA2025B TEA2025D STEREO AUDIO AMPLIFIER
TEA2025B TEA2025D STEREO AUDIO AMPLIFIER DUAL OR BRIDGE CONNECTION MODES FEW EXTERNAL COMPONENTS SUPPLY VOLTAGE DOWN TO 3V HIGH CHANNEL SEPARATION VERY LOW SWITCH ON/OFF NOISE MAX GAIN OF 45dB WITH ADJUST EXTER- POWERDIP 12+2+2 SO20 (12+4+4) NAL RESISTOR SOFT CLIPPING ORDERING NUMBERS: TEA2025B (PDI
1 Edition 21S-FX10F/10Sx/x1x0xNx/x1x0xUx SERVICE MANUAL COLOUR TELEVISION Chassis No.GA-7S 21S-FX10F 21S-FX10S 21S-FX10N MODEL 21S-FX10U FEATURES
TopPage 1 Edition 21S-FX10F/10Sx/x1x0xNx/x1x0xUx SERVICE MANUAL No. S7810621SFX10F COLOUR TELEVISION Chassis No.GA-7S 21S-FX10F 21S-FX10S 21S-FX10N MODEL 21S-FX10U In the interests of user-safety (Required by safety regulations in some countries) the set should be restored to its original condition
LM4780 LM4780 Overture Audio Power Amplifier Series Stereo 60W, Mono 120W Audio Power Amplifier with Mute Literature Number: SNAS193A
LM4780 LM4780 Overture Audio Power Amplifier Series Stereo 60W, Mono 120W Audio Power Amplifier with Mute Literature Number: SNAS193A LM4780 Overture™ Audio January 22, 2010 Power Amplifier Series Stereo 60W, Mono 120W Audio Power Amplifier with Mute General Description Key Specifications The LM4780
HP 2500C Series Printer Service and Support Manual
HP 2500C Series Printer Service and Support Manual Version History Version 2.0 January 1, 1999 Notice The information contained in this document is subject to change without notice. Hewlett-Packard makes no warranty of any kind with regard to this material, including, but not limited to, the implied
HP LaserJet 4000 and 4050 Series Printers Service Manual Manual Part No. 4050 Series Printers
HP LaserJet 4000 and 4050 Series Printers Service Manual Copyright© 1999 Hewlett-Packard Co. Printed in USA HP LaserJet 4000 and Manual Part No. 4050 Series Printers C4251-91003 Service Manual *C4251-91003* Printed on at least 50% Total Recycled Fiber with *C4251-91003* at least 10% Post-Consumer Pa
Cut Sheet Printers Maintenance Manual Model C40D
Cut Sheet Printers Maintenance Manual Model C40D E1195 HP Part No. C4672-90005 Notice Hewlett-Packard makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not b
User Manual Model D640 (Printer and Accessories)
HP 5000 Cut Sheet Printers User Manual Model D640 (Printer and Accessories) HP 5000 D640 Cut Sheet Printer User Manual (Printer and Accessories) Hewlett-Packard Company C5620-90024 E0397 Notice Hewlett-Packard makes no warranty of any kind with regard to this material, including, but not limited to,
Service Manual HP DesignJet 2000CP HP DesignJet 2500CP HP DesignJet 2800CP HP DesignJet 3000CP HP DesignJet 3500CP HP DesignJet 3800CP Printers
Service Manual HP DesignJet 2000CP HP DesignJet 2500CP HP DesignJet 2800CP HP DesignJet 3000CP HP DesignJet 3500CP HP DesignJet 3800CP Printers For HP Internal Use Only Warranty WARNING Copyright Hewlett- The information contained in The procedures described in Packard Company 1998 this document is
Combined Service Manual f’- (\’ ( HP LaserJet 4L/ 4ML (C2003A/ C2015A)
Combined Service Manual f’- ‘\ (\’ ( HP LaserJet 4L/ 4ML (C2003A/ C2015A) / HP LaserJet 4P/ 4MP (C2005A/ C2040A) 0 Copyright Hewlett- Warranty WARNING Packard Company 1993 The information contained Electrical Shock Hazard in this document is subject To avoid electrical shock, All Rights Reserved. Re
SERVICE MANUAL HP C2858A/C2859A DRAFTING PLOTTERS
SERVICE MANUAL HP C2858A/C2859A DRAFTING PLOTTERS SERIAL NUMBERS This manual applies directly to HP C2858A and C2859A plotters with serial numbers prefixed USA. For additional information about serial numbers, see SERIAL NUMBER INFORMATION in Chapter 1. HEWLETT-PACKARD COMPANY 1993 16399 W. BERNARDO
Service Supplement HP LaserJet 5P / 5MP / 6P / 6MP Printer (C3150A / C3155A /
Service Supplement HP LaserJet 5P / 5MP / 6P / 6MP Printer (C3150A / C3155A / C3980A / C3982A) Service Supplement HP LaserJet 5P / 5MP Printer (C3150A / C3155A) © Copyright Hewlett- Warranty WARNING Packard Company 1995 The information contained Electrical Shock Hazard in this document is subject To
Service Manual HP LaserJet 4V / 4MV (C3141A / C3142A)
Service Manual HP LaserJet 4V / 4MV (C3141A / C3142A) © Copyright Warranty WARNING Hewlett-Packard Company The information contained Electrical Shock Hazard 1994 in this document is subject to change without notice. To avoid electrical shock, All Rights Reserved. use only supplied power Reproduction
HP Color LaserJet 4500, 4500 N, 4500 DN Printer Service Manual 4500, 4500 N, 4500 DN Printer Service Manual October 1999 Edition Manual Part No.
HP Color LaserJet 4500, 4500 N, 4500 DN Printer Service Manual 4500, 4500 N, 4500 DN Printer Service Manual October 1999 Edition Copyright© 1999 Hewlett-Packard Co. Printed in USA Manual Part No. C4084-91077 *C4084-91077* Printed on at least 50% Total Recycled Fiber with *C4084-91077* at least 10% P
HP Computer Museum www.hpmuseum.net
HP Computer Museum www.hpmuseum.net For research and education purposes only.
LaserJeSteries!1PriiTter [HP 33440) and LaserJeitlPlrinte(rHP33449) CombinedServicMeanual
LaserJeSteries!1PriiTter [HP 33440) and LaserJeitlPlrinte(rHP33449) CombinedServicMeanual _ HP Part No. 33449-90906 Printed in USA * ., First Edition - February 1990 ,, 2- ‘,. ,,,, . , \, ‘ / > Notice HEWLETT-PAC- MA~S NO WARRAN~ OF ~ ~ND ~TH REGARD TO T~S ~TERM, INCLUDING, BUT NOT LIMITED TO, THE I
hp LaserJet 8100 and 8150 print systems and paper handling devices service manual
hp LaserJet 8100 and 8150 print systems and paper handling devices printed on at least 50% total recycled fiber with at least 10% post-consumer paper copyright © 2000 Hewlett-Packard Company printed in USA service manual english *C4265-90907* *C4265-90907* C4265-90907 HP LaserJet 8150 and 8100 Serie
SE R VICE Manual
CDMA PORTABLECELLULARTELEPHONESCH-210 SE R VICE Manual CDMA PORTABLE CELLULAR TELEPHONE CONTENTS 1. General Introduction 2. Specification 3. Installation 4. NAM Programming 5. Product Support Tools 6. Circuit Description 7. Troubleshooting 8. Exploded Views and Parts List 9. PCB Diagrams 10. Electri
FIXED WIRELESS PHONE SE R VICE Manual
FIXED WIRELESS PHONE SCW-F200 SE R VICE Manual FIXED WIRELESS PHONE CONTENTS 1. Specification 2. NAM Programming 3. Data Transfer 4. Setup Method 5. Circuit Description 6. Troubleshooting 7. Exploded Views and Parts List 8. PCB Diagrams 9. Electrical Parts List 10. Block & Circuit Diagrams 10. SCW-F
SERVICE Manual
GSM Mobile Cellular Phone SGH-600 SERVICE Manual GSM Mobile Cellular Phone CONTENTS 1. General Description 2. Circuit Description 3. Specification 4. Manual Adjustment Test Procedure 5. Troubleshooting C /OK / 6. PCB Views 1 . 2ABC 3 DEF 4 GHI 5JKL 6MNO 7. Electrical Parts List 7PQRS 8TUV 9WXYZ 8. E
GSM Mobile Cellular Phone Manual SERVICE
GSM Mobile Cellular Phone SGH-500 Manual SERVICE GSM Mobile Cellular Phone CONTENTS 1. Exploded Views and Parts List 2. Electrical Parts List 3. Block Diagram 4. PCB Views 5. Schematic DiagramsC12ABC 3DEF 4GHI 5JKL 6MNO 7PQRS 8TUV 9WXYZ 5. Schematic Diagrams 5-1 Main Power 5-1 5-2 Memory Power 5-2 5
DUAL BAND Mobile Cellular Phone SGH-2400 SERVICE Manual
DUAL BAND Mobile Cellular Phone SGH-2400 SERVICE Manual DUAL BAND Mobile Cellular Phone CONTENTS 1. Exploded Views and Parts List 2. Electrical Parts List 3. Block Diagrams 4. PCB Diagrams 5. Schematic Diagrams 6. Troubleshooting 1 2ABC 4G 3DH EFI 5 JKL 7 6MPQ NR OS 8TUV 9WXYZ 1. Exploded Views and
GSM Mobile Cellular Phone SGH-800C SERVICE Manual
00-Cover 7/28/99 2:29 PM Page 2 GSM Mobile Cellular Phone SGH-800C SERVICE Manual GSM Mobile Cellular Phone CONTENTS 1. Exploded Views and it’s Part list 2. Electrical Part list 3. Block Diagram 4. PCB Diagram 5. Schematic Diagram 01-Exploded 7/21/99 11:28 AM Page 1 1. Exploded View and its Parts Li
  Circuit Diagrams 5-1 SGH-M100 Intergrated Analog Circuit Diagram
SGH-M100 Inte rgrated Analog Circuit Diagram 5. Circuit Diagrams 5-1 SGH-M100 Intergrated Analog Circuit Diagram SAMSUNG Proprietary-Contents may change without notice 5-1 5-2 SGH-M100 I/F Connector & LCD Circuit Diagram VBat Bp_Vf Vtest SPK2P SPK2N Vext Vext VCCD Debug_Rx Debug_Tx MP_UP_DATA MP_DOW
DUAL BAND Mobile SGH-Q100 by Toko (www.gsm-free.com) Manual SERVICE
DUAL BAND Mobile SGH-Q100 by Toko (www.gsm-free.com) Manual SERVICE Dual Band Mobile Cellular Phone CONTENTS 1. Electrical Parts List 2. Exploded Views and Parts List 3. Block Diagrams 4. PCB Diagrams 5. Flow Chart of Troubleshooting and Circuit Diagrams C DEF ABC GHI MNO JKL PQRS WXYZ TUV 1. SGH-Q1
GPRS GSM TELEPHONE SGH-Q200 Manual SERVICE
GPRS GSM TELEPHONE SGH-Q200 Manual SERVICE GPRS GSM TELEPHONE CONTENTS 1. Electrical Parts List 2. Exploded Views and Parts List 3. Block Diagrams 4. PCB Diagrams 5. Flow Chart of Troubleshooting and Circuit Diagrams 1. SGH-Q200 MAIN Electrical Parts List Level SEC Code Design LOC DESCRIPTIONS 0 SGH
Cordless TelephoneSECRETManual SERVICE
Cordless Telephone SP-R5100SECRETManual SERVICE Cordless Telephone CONTENTS 1. Safety Precautions 2. Specification 3. Function Structures 4. Test Mode 5. Component Pin Array 6. Alignment & Adjustment 7. Trouble Shooting 8. PCB Views 9. Electrical Parts List 10. Exploded Views and Parts List 11. Bloc
HP Mopier 320 System Service Manual English Manual Part No. HP Mopier 320 System C4229-90911 Service Manual
HP Mopier 320 System Service Manual English Copyright© 1998 Hewlett-Packard Co. Printed in USA Manual Part No. HP Mopier 320 System C4229-90911 Service Manual *C4229-90911* Printed on at least *C4229-90911* 50% Total Recycled Fiber with at least 10% Post-Consumer Paper C4229-90911 Service Manual HP
GPRS GSM TELEPHONE SGH-S100 Manual SERVICE
GPRS GSM TELEPHONE SGH-S100 Manual SERVICE GPRS GSM TELEPHONE CONTENTS 1. Electrical Parts List 2. Exploded Views and Parts List 3. Block Diagrams 4. PCB Diagrams 5. Flow Chart of Troubleshooting and Circuit Diagrams 1. SGH-S100 MAIN Electrical Parts List Level SEC Code Design LOC DESCRIPTIONS 0 GH9
SERVICE MANUAL MICRO COMPONENT SYSTEM MODEL XL-55 MODEL XL-55C
SERVICE MANUAL No. S1308XL55//// MICRO COMPONENT SYSTEM MODEL XL-55 XL- 55 Micro Component System consisting of XL- 55 (main unit) and CP- XL55 (speaker system). MODEL XL-55C Illustration XL-55 XL- 55C Micro Component System consisting of XL- 55C (main unit) and CP- XL55 (speaker system). • In the i