 Posted by Jan B on 04/15/06 13:29 
On Thu, 13 Apr 2006 17:16:03 GMT, "news.cup.hp.com" 
<thomasDELME_gilgDELME@hpDELME.com> wrote: 
 
>Someone wrote: 
>> Have you *ever* studied how DVI, HDMI, *or* component video work?  They 
>> output one pixel at a time, and the display changes one pixel at a time. 
>> Just like a CRT. 
>> 
>> Moron. 
> 
>Video connectors necessarily act like a serial or parallel data cable, but  
>in the digital age, capture and display devices do not necessarily operate  
>"one pixel at a time". 
> 
>CCD video cameras can essentially grab a whole frame within 1/60 of a second  
>or far faster, like 1/5000 of a second. The results of the nearly  
>instantaneous frame grab are often placed into a frame buffer, and it is from 
>the frame buffer that a slower read process can serialize the bits (pixels)  
>over say an HDMI cable. You should also read about progressive segmented  
>frames (http://en.wikipedia.org/wiki/Progressive_segmented_Frame) as a  
>further example of how frame buffers between the CCD and video connector may  
>be necessary. 
> 
>Ditto on the display side. Read http://en.wikipedia.org/wiki/LCD to see how  
>different types of LCD displays load their pixels. Many LCDs these days load  
>pixels a row at a time. 
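
If I read the buffering part correctly, the pipeline would look 
something like this toy model (illustrative Python only, not any real 
camera API; the sizes and pixel values are made up): 

# Toy model: the whole frame is grabbed "at once" (global shutter),
# then read out of the buffer row by row toward the video connector.
WIDTH, HEIGHT = 8, 4

def grab_frame():
    # integrate the entire frame at one instant
    return [[(x + y) % 10 for x in range(WIDTH)] for y in range(HEIGHT)]

frame_buffer = grab_frame()   # capture: effectively instantaneous
for row in frame_buffer:     # readout: serialized over time
    print(" ".join(str(p) for p in row))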
 
This puzzles me a little. 
My understanding from the above is that it takes some time for an LCD 
panel to update the complete picture, row by row. 
But how long does it take from top to bottom? 
 
The original video (TV) method (I don't know if it is still used in 
studio cameras) used tubes that were scanned in the same way as a 
50/60 Hz CRT monitor. The motion portrayal would be correct this way, 
as the scanning is the same at "both ends". 
 
Now consider a CCD sensor of the type that integrates and stores one 
frame (or half-frame) nearly instantaneously. 
 
When such a camera pans horizontally across a vertical object like a 
pole, and the video is displayed on a CRT that takes nearly a field 
period to scan from top to bottom, I would expect to perceive a tilted 
object "moving" across the screen. This is because the scanning at the 
bottom lags the top on display, but not at recording. 
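
To put rough numbers on the tilt I would expect (a back-of-the-envelope 
sketch in Python; the pan speed and line count are assumed values, not 
measured from any real camera or display): 

import math

pan_speed = 600.0        # assumed horizontal pan, pixels per second
field_period = 1.0 / 50  # 50 Hz field rate, seconds per field
lines = 576              # assumed visible lines, top to bottom

# The eye tracks the pole smoothly, but the CRT paints the bottom line
# almost one field period after the top line, so on the retina the
# bottom of the pole lags the top by roughly:
skew_px = pan_speed * field_period
tilt_deg = math.degrees(math.atan2(skew_px, lines))
print(f"skew ~{skew_px:.0f} px over {lines} lines -> tilt ~{tilt_deg:.1f} degrees")

With these assumed numbers the skew is about 12 pixels, a tilt of 
roughly one degree: small, but perhaps visible on a sharp vertical edge. 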
 
Has anybody noticed such effects? 
 
The opposite should be visible if the video were shot with the scanning 
method and displayed with an instantaneous (or too fast) refresh of one 
complete frame/field at a time. 
 
The argument for introducing strobing and row-by-row illumination for 
LCD panels (refer to Philips' "Clear LCD" development) is that our 
brain gets disturbed by the "sample-and-hold" effect as we try to 
follow what should be smooth motion. 
(The row-by-row synchronised ON/OFF illumination would also work around 
slow pixel response times.) 
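
A rough illustration of why shorter illumination should help (the 
tracking speed and refresh rate below are assumed numbers): while the 
eye follows the motion smoothly, a pixel that stays lit for the whole 
frame smears across the retina by speed times hold time. 

speed = 600.0            # assumed tracked motion, pixels per second
frame_period = 1.0 / 50  # 50 Hz refresh

# Compare a full-frame hold, a 25% strobe and a short CRT-like flash.
for duty in (1.00, 0.25, 0.05):
    blur_px = speed * frame_period * duty  # smear on the tracking eye
    print(f"lit {duty:.0%} of the frame -> ~{blur_px:.1f} px of blur")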
 
The argument says that the strobing/decay of a CRT is better in this 
respect, but I have a problem understanding how that can help when 
there is a lag in the strobing between the top and bottom parts of the 
picture, and that lag differs from what was shot in the film frame. 
 
Hope somebody can explain. 
/Jan
 
  