Posted by Richard Crowley on 10/05/83 11:52
"Martin Heffels" wrote ...
> Ptravel wrote:
> Data drop-out is the equivalent of generation-loss in
> digital duplication. Of course, the lawyer that you are,
> we are going to have a semantics battle about this :-)
> But I use it in terms of popular semantics, which is not
> always strictly correct for those who follow the letter
> of the law :-)
IMHO, Mr. Tauger and Mr. Heffels are both correct. The
problem is that we are still using "analog" terminology
even here in the digital age.
>> Look up the meaning of drop out. It has nothing to
>> do with codecs.
Correct. They are two completely independent factors
in the digital path.
> We were talking about generation-loss from multiple
> recompression. A couple of years ago I tried this with
> the MS-DV25 codec, and after the second recompression,
> you already saw some serious loss in quality. The MS-
> DV25 codec has improved tremendously by now, but it
> still can't hold a candle to the ones from Canopus and Matrox.
Note that decompression and recompression only happen
if you go through an analog step. Either by using an analog
connection to copy/transfer the signal, or within the NLE
process when you modify the video in some manner (title,
transition effect, etc.). The nice thing about DV (and other
digital formats) is that simple copying does NOT involve
decompression and recompression, so dubbing is affected
only by dropouts, etc.
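To make the distinction concrete, here is a minimal sketch (in Python, with a made-up byte string standing in for a DV bitstream) of why a pure digital dub involves no generation loss: the compressed data is copied verbatim, never decoded and re-encoded.

```python
import hashlib

# Hypothetical stand-in for a compressed DV frame payload.
original = bytes(range(256)) * 100

# A straight digital dub copies the bitstream byte-for-byte...
dub = bytes(original)

# ...so every generation hashes identically: no recompression occurred.
same = hashlib.md5(original).hexdigest() == hashlib.md5(dub).hexdigest()
print("digital dub is bit-identical:", same)   # → digital dub is bit-identical: True
```

Only when the signal leaves the digital domain (or is re-rendered in the NLE) does the decompress/recompress cycle, and hence generation loss, come into play.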
>> You can dupe a D-25 tape 18 times and the 18th
>> copy will be identical to the first.
Not by engineering standards. It is highly unlikely that
the low-grade error detection and correction used in
the DV tape format could read and write 18 sequential
dubs with 100% accuracy. OTOH, if you were to say
that usually you can't *see* any anomalies from an 18-
generation DV dub, that is a different matter.
>> Yeah -- one is tape, one isn't. However, the statement
>> stands: error correction is used for both media,
Note that the error detection, and particularly the error
correction mechanisms used for *data* are substantially
more rigorous and effective than those used for *media*.
This applies to Red-Book audio CDs (vs. data CD-ROM)
as well as for DV tape vs. a DV-AVI computer file. This
is, for example, the reason you cannot store as much info
on a CD-ROM as you can on an audio CD. I have many
examples of audio CDs I have made where the raw WAV
tracks won't fit on the same disc because of the extra
overhead from the (Orange Book) data format.
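The overhead is easy to see from the sector sizes in the CD specifications: both audio and data discs spin off the same 2352-byte sectors at 75 per second, but a Mode 1 data sector spends sync, header, and an extra EDC/ECC layer out of those 2352 bytes, leaving only 2048 bytes for your file. A back-of-envelope calculation for a 74-minute disc:

```python
# Bytes of usable payload per 1/75-second CD sector:
AUDIO_PAYLOAD = 2352   # Red Book audio: the whole sector is samples
DATA_PAYLOAD = 2048    # Mode 1 data: sync + header + EDC/ECC eat the rest

sectors = 74 * 60 * 75                       # sectors on a 74-minute disc
audio_mb = AUDIO_PAYLOAD * sectors / 2**20
data_mb = DATA_PAYLOAD * sectors / 2**20
print(f"audio: {audio_mb:.0f} MB, data: {data_mb:.0f} MB")
# → audio: 747 MB, data: 650 MB
```

Which is exactly why raw WAV tracks that fill an audio disc won't fit on the same disc as data files.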
>> and the chances of drop out are minuscule on either.
One of the dangers of digital storage and transmission
is that we start taking it for granted. The only reason
digital storage and transmission is better than analog
is that the data is more predictable (either "1" or "0")
so it is easier to tell when something went wrong.
Simultaneously, inexpensive and powerful integrated
circuits are available which can do real-time error
detection and correction (in the case of data) or error
detection and mitigation (in the case of audio/video media).
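The "predictable data" point can be illustrated with the simplest possible scheme, a single parity bit (a toy illustration only; DV and disc drives use far stronger codes). Because every symbol is exactly 1 or 0, the receiver can tell that *something* went wrong, even if not which bit:

```python
def parity(bits):
    # Even parity: the check bit makes the total count of 1s even.
    return sum(bits) % 2

data = [1, 0, 1, 1, 0, 1, 0, 0]
check = parity(data)

# Flip one bit "in transit"; the recomputed parity no longer matches.
corrupted = data[:]
corrupted[3] ^= 1
print("error detected:", parity(corrupted) != check)   # → error detected: True
```

An analog signal has no equivalent test: any voltage is a plausible voltage, so small degradation passes unnoticed and accumulates.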
> No. A hard-disk moves data to another block if it
> thinks the block is corrupt.
There are many more layers/levels of error
detection and correction than that.
> But there is no chance that the operating system knows
> if the data is correct. It can only make an educated guess.
Actually, the operating system has nearly foolproof
ways of telling whether the data is correct (checksums,
etc.). They are used as additional layers of error detection
and correction on top of those used by the disc drive.
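A minimal sketch of the checksum idea, using the CRC-32 routine from Python's standard zlib module (real filesystems and drives use stronger codes, but the principle is identical): store a checksum with the data, recompute it on read-back, and any mismatch flags corruption.

```python
import zlib

payload = b"frame 0042 of the capture"   # hypothetical stored data
stored_crc = zlib.crc32(payload)

# Later, re-read the data and re-verify the checksum.
reread = bytearray(payload)
reread[5] ^= 0x01                        # simulate one flipped bit on disk

ok = zlib.crc32(bytes(reread)) == stored_crc
print("data intact:", ok)                # → data intact: False
```

This is detection, not a guess: the odds of a random corruption producing the same 32-bit CRC are about 1 in 4 billion.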
> With tape there are some extra bits and pieces sent
> with the data which can repair the data a bit. But
> sometimes this goes wrong as well.
Indeed. There are complex error detection mechanisms
used on DV tape. Uncorrected video errors are most
frequently seen as "pixelization" where the data isn't
good enough to resolve down to the pixel level, so
the hardware "notches down" to the next available
size (4 or 16 or 32 or 64 pixels, etc.)
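That "notching down" can be pictured as replacing an unreadable block with a single coarser value; here is a toy sketch (the actual DV concealment logic is more sophisticated, e.g. repeating data from an adjacent frame):

```python
def conceal(block):
    # Hypothetical concealment: when per-pixel data can't be recovered,
    # fall back to one average value for the whole block -- which is what
    # on-screen "pixelization" into 4/16/64-pixel squares looks like.
    avg = sum(block) // len(block)
    return [avg] * len(block)

damaged_4px_block = [120, 130, 110, 140]   # made-up luma samples
print(conceal(damaged_4px_block))           # → [125, 125, 125, 125]
```

The viewer sees a blocky patch rather than garbage, because the hardware mitigates the error instead of correcting it.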