• The History of Recording Technology Today's recording technology cannot be separated from the inventions that came before it. The Phonograph, Thomas A. Edison's creation of 1877, marks the starting point of the recorded-sound era. A cylinder (the wax cylinder) coated in a smooth, wax-like material served as the medium onto which sound was captured. For playback, a needle-like stylus on the Phonograph was drawn along the grooves of the cylinder, and the resulting vibrations were reproduced as sound through the Phonograph's horn.

    In 1887, Emile Berliner developed the Phonograph into the Gramophone. Apart from the shape of the medium, which became flat like a CD, not much changed. Only the recording method was different: Edison's vertical-cut recording gave way to lateral (horizontal) recording, which kept the depth of the groove more constant.

    Rapid progress came in 1924 with the Victor Orthophonic phonograph: the quality of the recording process improved through the use of electrical recording, and a redesigned horn produced a flatter frequency response.

    In a separate line of development, Valdemar Poulsen introduced magnetic recording with the Telegraphone in 1898. By exploiting magnetism, the medium passed the recording head at a constant speed, yielding better sound than the preceding technologies.

    The tape recorder is also part of recording's long history. Tape recorders began to be developed in Germany in 1932. Ever-smaller tape formats with good stereo sound allowed musicians to record with the support of increasingly simple equipment.

    By the late 1990s, digital recording had become the industry standard. Today, in the new millennium, the recording tape has handed its job over to the hard disk, which makes the recording process more practical still, and the old phonograph horn has been "conjured" into loudspeakers with sophisticated kinetic technology.
    So what will digital technology look like 10, or even 100, years from now (assuming, rather optimistically, that we're still around)? We shall just have to follow its progress....

  • 7 Facts Audiophiles Need to Know About Digital Music Remember back in the 1980s when you purchased your first CD? Whether it was Billy Idol or The Psychedelic Furs, imagine if you had gone home and placed the Sony-manufactured CD in your Panasonic CD player, only to find out that it didn’t work. Or, what if that CD from Virgin Records only had half the sound quality of a CD bought from Best Buy? Believe it or not, this is exactly the current digital music environment in which we live.

    To navigate a digital world without standards, today’s audiophiles must gain some digital music knowledge to optimize their listening experience as they convert their CDs to digital music.
    To understand where we’re headed with today’s digital music, it’s key to understand where we’ve been. All digital music formats are based on the principles discovered by German researchers at the prestigious Fraunhofer Institute.

    In 1987, the Institute began researching high quality digital audio compression. They discovered that by understanding how humans hear music, a particular song could be stripped of excess sounds that were inaudible. The obvious first choice was to remove frequencies too high or too low for the human ear to perceive. However, the more interesting breakthrough was to eliminate “masked” sounds—those sounds that are hidden behind louder sounds.

    During a Jimi Hendrix guitar solo, for instance, drummer Mitch Mitchell may have been producing a lot of noise of his own, but Jimi’s solo masks that sound. Similarly, in a compressed digital song, the hidden pieces of Mitchell’s drumming are removed completely, leaving the illusion of a full musical performance, but reducing the amount of information in the digital file. The effect is analogous to a Hollywood set in a 1950s spaghetti western, where the buildings on main street appear real to the audience but are facades.

    Here are seven facts about digital music that are critical whether you’re planning to install a $100,000 multi-room audio solution or simply enjoying music on your iPod in your car or at the gym.

    There Are Many Flavors of Digital Music: Learn Your Formats

    The end result of the Fraunhofer Institute’s digital audio research was the MP3, or Motion Pictures Expert Group Audio Layer III. This MP3 standard for audio compression first gained a foothold in college dorm rooms in the late 1990s. In 1999, 18-year-old computer geeks weren’t too concerned with sound quality, but now they’ve grown up and so has digital music.

    Many more digital audio formats have since been introduced, including these more popular formats:

    * AAC (Advanced Audio Coding): An MPEG-developed successor to MP3, adopted by Apple as the standard format for iTunes.
    * WMA (Windows Media Audio): Developed by Microsoft, with encoding support built into Windows XP.
    * AIFF (Audio Interchange File Format): A professional Apple format for storing high-quality, uncompressed audio files, co-developed by Apple and based on Electronic Arts' Interchange File Format.
    * FLAC (Free Lossless Audio Codec): An open-source, royalty-free format that uses modest compression (about a 2:1 ratio) while maintaining CD audio quality.
    * ALAC (Apple Lossless Audio Codec): A codec developed by Apple that preserves CD quality at a lossless compression ratio of about 2:1.

    This alphabet soup of CODECs can be broken down into two simple subsets: Lossless (ALAC, FLAC, AIFF, WMA Lossless) and Lossy (MP3, AAC, WMA).

    The main advantage of Lossless CODECs is that the file size is reduced by up to 60 percent without sacrificing the CD’s audio integrity. This, however, still requires a sizable amount of computer storage—roughly 200-400 megabytes per CD. As the cost of storage continues to fall, Lossless CODECs provide an ideal way to create a master archive of your CD collection, which can later be burned onto blank CDs or played through high-end digital music servers with little to no audio loss.

    Meanwhile, as the name implies, Lossy CODECs do not preserve the sound quality of the original CD. The advantage, however, is that CDs can be compressed to files 10 to 12 times smaller than the original. That’s key to the iPod revolution—so that a 500 CD collection can easily be compressed and stored on a portable player with 30 gigabytes of memory. MP3 remains the most popular Lossy standard, mainly because all brands of players can decode the ubiquitous CODEC. Microsoft and Apple developed WMA and AAC, respectively, to address perceived deficiencies with MP3, especially at high compression rates, but these CODECs only work on limited brands of hardware.

    Understand Compression Rates to Balance File Size with Audio Quality

    When computer geeks talk about information, they use the term “bitrate” to describe the pieces or “bits” of information that are processed per second. In general, the more “bits” of information included in the digital audio file, the better the audio quality. In turn, the more bits included, the larger the digital file.

    That’s why it’s not enough to simply know the CODEC or format of a digital audio file, especially for Lossy formats. MP3 192 kbps (192,000 bits of information per second) is a far different listening experience than MP3 32 kbps. Digital music sold online through iTunes and Rhapsody is 128 kbps, which is far inferior to CD quality.
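    The arithmetic behind those bitrate figures is simple; here is a back-of-the-envelope sketch in Python (the bitrates come from the text above, the 4-minute song length is an assumption for illustration):

    ```python
    def file_size_mb(bitrate_kbps, duration_s):
        """Approximate audio file size: bitrate (kbit/s) times duration, in megabytes."""
        return bitrate_kbps * 1000 * duration_s / 8 / 1_000_000

    # A typical 4-minute (240 s) song at common lossy bitrates:
    for kbps in (32, 128, 192, 320):
        print(f"MP3 {kbps} kbps -> {file_size_mb(kbps, 240):.2f} MB")
    ```

    The same formula shows why MP3 320 is exactly twice the size of MP3 160, and why uncompressed CD audio (1,411 kbps) dwarfs them all.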

    While the average person may not notice this through $29 portable headphones, the washed-out bass and limited range will be immediately evident when the song is played through any decent home stereo or car audio system. When choosing a Lossy format, MP3 192 is a nice compromise between file size and audio quality, while MP3 320 is twice the file size, but provides near CD-quality sound. Online music retailers, such as Amazon, eMusic and MusicGiants, offer higher quality digital files, while iTunes recently began carrying certain titles at a higher bitrate.

    Garbage In, Garbage Out: Hardware and Software Used to Process CDs Matters

    There’s a huge difference between a CD ripped by a home computer and one ripped through a professional system. The rips are done with the same CODEC and bitrate, yet the resulting audio experience is substantially different. Not everyone can afford a million-dollar commercial CD and DVD processing system, but there are some steps you can take to optimize the quality of your rip if you are processing a CD on your home computer.

    The most obvious place to start is the type of hardware and software you’re using to process a CD. The type and configuration of CD-ROM drive, for instance, can go a long way to ensure that the data you’re extracting from the CD is clean and complete.
    The first step is to check the drive manufacturer and model number to confirm that the drive supports audio extraction. Drives that vibrate or increase CD RPM to maintain linear velocity can cause substantial seek errors that translate into pops, clicks and gaps. We’ve tested hundreds of drives, and find Plextor drives to be some of the most stable and accurate. Here are some other basic hardware and software considerations to take into account when ripping CDs at home.

    While these won’t solve your quality challenges completely, they do offer a nearly 70-percent solution.

    * Make sure CD-ROM drivers are up-to-date.
    * Use a computer with a powerful CPU, as ripping and compression are computer intensive activities.
    * Do not run unnecessary programs when ripping, and keep the destination hard drive defragmented.
    * If you are not comfortable with configuring specialized ripping and encoding software, stick with Apple iTunes or Windows Media Player.
    * If you are going to use more robust free programs, such as Exact Audio Copy, run some tests to make sure the CODEC output is compatible with your digital player. For instance, a program such as dBpowerAMP tries to imitate ALAC (Apple Lossless), which results in playback problems on some popular digital players.
    * Listen to the output of the first few CDs before dedicating the next six months of weekends to processing your collection. Do it right the first time. Once you’ve determined the optimum settings for ripping on your computer and have successfully ripped at least one CD, you generally should not have to change them.

    Computers Are Not Designed to Rip CDs: Consider Error Correction Software

    Even with the ideal CD-ROM drive, hardware and software, computers were not originally designed for audio extraction. An audio CD player reads data in a continuous manner, with its laser following a smooth track. Computers, on the other hand, read information in blocks. So, blocks of audio data are read from random sectors and then written to new random sectors on the computer’s hard drive.

    As you can imagine, all this disjointed sector reading and writing can leave out important information, or add unwanted new data. Based on the principle of garbage in, garbage out, desktop computers, laptops, music servers and over-the-counter carousels and mini-robots repackaged for CD ripping oftentimes produce inferior digital tracks even when CDs are new and Lossless CODECs are used.

    The key is to use error-correction algorithms that read overlapping blocks, compare them, discard the inconsistencies and reread if necessary to confirm that the data on the original audio CD matches the extracted data. This is especially important when processing used CDs, since 20 years of CD abuse makes error correction essential during the ripping process.
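    The read-compare-reread idea can be sketched in a few lines of Python. Everything below, including the `read_block` stand-in for a raw sector read, is hypothetical and greatly simplified; real rippers compare overlapping runs of sectors and do far more analysis:

    ```python
    def secure_read(read_block, lba, retries=3):
        """Re-read the same sector until two consecutive reads agree;
        otherwise report an unrecoverable read. `read_block` is a
        stand-in for a raw CD sector read at logical block `lba`."""
        prev = read_block(lba)
        for _ in range(retries):
            cur = read_block(lba)
            if cur == prev:          # two matching reads -> accept the data
                return cur
            prev = cur               # mismatch -> keep re-reading and comparing
        raise IOError(f"unrecoverable read at sector {lba}")

    # Simulated flaky drive: sector 7 returns garbage on its first read only.
    flaky = {"first": True}
    def fake_read(lba):
        if lba == 7 and flaky.pop("first", None):
            return b"garbage"
        return b"sector-%d" % lba

    assert secure_read(fake_read, 7) == b"sector-7"
    ```

    The same agreement-by-rereading principle is what lets a ripper distinguish a scratched sector from a clean one.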

    For audiophiles, there are freeware programs, such as cdparanoia, that provide relatively strong error correction.

    If you’re using iTunes and are like most people, you might not realize that a rudimentary error correction program was built into iTunes that can be turned on. Go to EDIT>PREFERENCES>ADVANCED>IMPORTING and select “use error correction when importing CDs.” Note, however, that error correction is both drive and CPU intensive, which means that it could take two to three times longer to rip a CD.
    This could add hundreds of hours when ripping a large collection. Still, if you’re serious about your music, the goal should be to do it right the first time.

    Clean Metadata Must Be Embedded to Digital Files
    By applying the first four facts, audiophiles can create a near professional CD quality digital song. Yet, there’s another technical fact that can make or break your digital music experience. The information about a song or artist does not live on the CD itself, so it must be added from another source or manually typed into iTunes or Windows Media Player.

    Many music server manufacturers and commercial processing firms have partnered with large digital audio data companies to embed all the identifying attributes of the song: artist name, album name, song title, track number, music genre and even composer and conductor in the case of classical music.

    The embedded information gives digital music some unique advantages, especially when it comes to searching and organizing vast CD collections. A specific song on a specific CD can be instantaneously located and played, no matter how large the CD collection. If only part of the album name or a key word from a song is remembered, the correct song is still only a click or two away. Finally, a group of songs, all with common identification features, such as “Blue Grass,” can be strung together to create playlists, turning just about anyone into their own personal DJ.

    Just as metadata helps those of us with mild compulsive disorders keep our music organized and at our fingertips, dirty metadata can wreak havoc on a digital music library. Computers do what they are told and do not realize that “Dave Matthews Band,” “The Dave Matthews Band,” “DMB,” and “Dave Matthews” are, in fact, the same artist and not four different bands. These types of errors peppered across a 1,000 CD music library undermine many of the benefits associated with digital metadata, which is why it’s so important that your CD ripping software has access to a clean source of data.
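    A minimal sketch of the kind of alias-mapping a metadata grooming pass performs (the alias table is invented for illustration; commercial services maintain far larger curated databases):

    ```python
    # Hypothetical alias table mapping messy artist tags to one canonical name.
    ALIASES = {
        "dave matthews band": "Dave Matthews Band",
        "the dave matthews band": "Dave Matthews Band",
        "dmb": "Dave Matthews Band",
        "dave matthews": "Dave Matthews Band",
    }

    def canonical_artist(tag):
        """Normalize whitespace and case, then map known aliases to one name.
        Unknown artists pass through with only surrounding whitespace stripped."""
        key = " ".join(tag.split()).lower()
        return ALIASES.get(key, tag.strip())

    assert canonical_artist("  DMB ") == "Dave Matthews Band"
    assert canonical_artist("The Dave Matthews Band") == "Dave Matthews Band"
    assert canonical_artist("Billy Idol") == "Billy Idol"
    ```

    With every variant collapsed to one canonical string, searches, sort orders and playlists all see a single artist instead of four.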

    iTunes has partnerships with Gracenote and Muze for data and album art. Other programs may use a free database on the Internet, which will make metadata clean-up after the fact a lot of work. For Riptopia, even though we are partners with Gracenote and Muze, we still keep a team of music majors in house to manually groom data, which is especially important for classical and jazz collections. If you are grooming your collection yourself, you can use the Windows Media Player/iTunes interfaces, or apply some useful programs, such as Tag and Rename.

    DRM May Limit the Use of Digital Music You Buy Online

    If you don’t have the time, knowledge or energy to rip your CD collection yourself, or you just want to buy some digital songs online, you can apply the first five facts to become a savvy online consumer.

    Check for CODEC, bitrates and quality metadata providers, such as Gracenote, Muze or AMG. Smart shoppers must be aware of Digital Rights Management (DRM). DRM is the term used to describe software strategies used by audio content providers to control how the digital content is used.

    Apple’s DRM, for instance, restricts the number of times that a song purchased through iTunes can be copied or burned onto a CD. DRM also has the unseemly side effect of limiting playback to Apple products, such as the iPod. So, if you just installed a $100,000 multi-room custom digital audio solution in your home, that $.99 iTunes song will not play.

    Match Your Digital Components With Your New Digital Music Knowledge
    Like it or not, it’s imperative to read the technical specifications of a digital player to create high quality digital music that maximizes its performance. For instance, Escient Fireballs and Sonos systems can play the Lossless CODEC FLAC, which means that they can play back CD-quality digital music. Windows-based systems, such as Niveus and Lifeware, can handle WMA Lossless.

    If you are using a Sonance iPort dock or a Russound iPod dock within your home, the iPod should be loaded with ALAC to create CD-quality sound. For wireless systems, such as Control4, or complete home audio solutions, such as Netstreams, stick with MP3 320 to maximize both sound and buffer performance. Regardless of the hardware you choose, it is always a good idea to keep a master copy of your digital library on data DVDs or an external hard drive that is kept in a cool, dry place like a family safe.

    You never know when your drive will crash, you’ll lose your iPod or want to upgrade your digital hardware. In the digital age, content is king. A $100,000 custom home installation with Crestron components and B&W speakers will mean nothing if your digital audio quality is poor.

    This holds true not only for sound quality, but also for the information, or metadata, that is embedded in the digital files. If you are an audiophile who wants to rip your CDs into digital music, take into account these seven facts so you can maximize the quality of your new digital music library. For high-end AV integrators, applying these seven facts to custom installations will allow you to complete the job with not only the proper components, but also the proper data to optimize those components.


  • Digital Audio Interface 1. AC'97 (Audio Codec '97)
    AC'97 (short for Audio Codec '97; also MC'97, short for Modem Codec '97) is Intel Corporation's Audio "Codec" standard developed by the Intel Architecture Labs in 1997, and used mainly in motherboards, modems, and sound cards.
    Intel's use of the word audio codec refers to signals being encoded/decoded to/from analog audio from/to digital audio, thus actually a combined audio AD/DA-converter. This should not be confused with a codec in the sense of converting from one binary format to another, such as an audio (MP3) or video (Xvid) codec in a media player.
    Audio components integrated into chipsets consist of two parts: an AC'97 digital controller (DC97), which is built into the I/O Controller Hub (ICH) of the chipset, and the AC'97 audio and modem codecs, which form the analog component of the architecture. AC'97 defines a high-quality, 16- or 20-bit audio architecture with surround sound support for the PC that is used in the majority of today's desktop platforms. AC'97 supports 96,000 samples/second in 20-bit stereo resolution and 48,000 samples/second in 20-bit stereo for multichannel recording and playback.
    Integrated audio is implemented with the AC'97 Codec on the motherboard, a Communications and Networking Riser (CNR) card, or an audio/modem riser (AMR) card.
    AC'97 v2.3 enables Plug and Play audio for the end user. This version provides parametric data about the analog device being used.
    In 2004 AC'97 was superseded by Intel High Definition Audio (HD Audio).

    2. Intel High Definition Audio
    Intel High Definition Audio (also called HD Audio or, by its development codename, Azalia) refers to the specification released by Intel in 2004 for delivering high-definition audio that is capable of playing back more channels at higher quality than previous integrated audio codecs like AC'97.
    Hardware based on the Intel HD Audio specification is capable of delivering 192-kHz/32-bit quality for two channels, and 96-kHz/32-bit for up to eight channels. However, as of 2008, most audio hardware manufacturers do not implement the full high-end specification, especially 32-bit sampling resolution.

    Microsoft Windows Vista and Windows XP SP3 include a Universal Audio Architecture (UAA) class driver which supports audio devices built to the HD Audio specification. Mac OS X has full support with its AppleHDA driver. Linux also supports Intel HDA controllers, as do the OpenSolaris, FreeBSD, NetBSD and OpenBSD operating systems.
    Like AC97, HD Audio is a specification that defines the architecture, link frame format, and programming interfaces used by the controller on the PCI bus and by the codec on the other side of the link. Implementations of the host controller are available from at least Intel, NVidia, and AMD. Codecs which can be used with such controllers are available from many companies, including Realtek, Conexant, and Analog Devices.

    3. ADAT (Alesis Digital Audio Tape) Interface
    Alesis Digital Audio Tape, or ADAT, first introduced in 1991, was used for recording eight tracks of digital audio simultaneously onto Super VHS magnetic tape - a format similar to that used by consumer VCRs. Greater numbers of audio tracks could be recorded by synchronizing several ADAT machines together. While this had been available in earlier machines, ADAT machines were the first to do so with sample-accurate timing - which in effect allowed a studio owner to purchase a 24-track tape machine eight tracks at a time. This capability and its comparatively low cost were largely responsible for the rise of project studios in the 1990s.
    "ADAT" is also used as an abbreviation for the ADAT Lightpipe protocol, which transfers 8 tracks in a single fiber optic cable. The ADAT cable standard is no longer strictly tied to ADAT tape machines, and is now utilized by analog-to-digital converters, input cards for digital audio workstations, effects machines, etc. One of the original benefits of utilizing ADAT versus S/PDIF or AES/EBU was that a single cable could carry up to eight channels of audio. (AES10 (MADI) can now carry up to 64 channels.)
    Several versions of the ADAT machine were produced. The original ADAT (also known as "Blackface") and the ADAT XT recorded 16 bits per sample (ADAT Type I). A later generation of machines - the XT-20, LX-20 and M-20 - supports 20 bits per sample (ADAT Type II). All ADATs use the same high quality S-VHS tape media. Tapes formatted in the older Type I style can be read and written in the more modern machines, but not the other way around. Later generations record at two sample rates, the 44.1 kHz and 48 kHz rates commonplace in the audio industry, although the original Blackface could only do 48 kHz. Most (all?) models allow pitch control by varying the sample rate slightly (and tape speed at the same time).

    With locate points it was possible to store sample exact positions on tape, making it easy to find specific parts of recordings. Using Auto Play and Auto Record functions made it possible to drop in recording at exact points, rather than relying on human ability to drop in at the right place.
    ADATs could be controlled externally with the Alesis LRC (Little Remote Control), which attached to the ADAT with a jack connector and featured the transport controls and most commonly used functions. Alternatively, the BRC (Big Remote Control) could be used, which included many features the standalone ADAT did not have, such as song naming, more locate points and MIDI Time Code synchronisation.

    4. AES/EBU interface with XLR connectors

    The digital audio standard frequently called AES/EBU, officially known as AES3, is used for carrying digital audio signals between various devices. It was developed by the Audio Engineering Society (AES) and the European Broadcasting Union (EBU) and first published in 1985, later revised in 1992 and 2003. Both AES and EBU versions of the standard exist. Several different physical connectors are also defined as part of the overall group of standards. A related system, S/PDIF, was developed essentially as a consumer version of AES/EBU, using connectors more commonly found in the consumer market.

    5. AES47: Professional AES3 digital audio over Asynchronous Transfer Mode networks
    AES47 describes a standardised method of interconnecting digital audio over a telecommunication standard network.

    The development of standards for digitising analogue audio, as used to interconnect both professional and domestic equipment, was started in the mid-1980s within the Audio Engineering Society and the European Broadcasting Union. This culminated in the publishing of the AES3 standard (frequently also known as AES/EBU) for professional use as well as, using different physical connections as specified in IEC 60958, within the domestic “Hi-Fi” environment. This work has provided the most commonly used method for digitally interconnecting audio equipment worldwide using physically separate cables for each stereo audio connection.
    Many professional audio systems are now combined with telecommunication and IT technologies to provide new functionality, flexibility and connectivity over both local and wide area networks. AES47 was developed to provide a standardised method of transporting the existing standard for digital audio (AES3) over current telecommunication interconnection standards that provide a quality of service required by many professional low latency, uncompressed live audio uses. It may be used directly between specialist audio devices or in combination with telecommunication and computer equipment with suitable network interfaces and utilises the same physical structured cable used as standard by those networks.

    AES47 (IEC 62365) is an open standard that specifies a method for packing AES3 professional digital audio streams over Asynchronous Transfer Mode networks. The details of these standards can be studied at the Audio Engineering Society standards web site by downloading copies of AES47-2006, AES-R4-2002 and AES3-2003. AES47 was originally published in 2002 and was republished with minor revisions in February 2006. Amendment 1 to AES47, published in February 2009, adds code points in the ATM Adaptation Layer Parameters Information Element to signal that the time to which each audio sample relates can be identified as specified in AES53.

    The change in thinking from traditional ATM network design is not necessarily to use ATM to pass IP traffic (apart from management traffic), but to use AES47 in parallel with standard Ethernet structures to handle extremely high performance, secure media streams. Work carried out at the British Broadcasting Corporation's (BBC) R&D department, published as "White Paper 074", established that this approach provides the necessary performance for professional media production.

    AES47 was developed to allow the simultaneous transport and switched distribution of a large number of AES3 linear audio streams at different sample frequencies, and it can support any of the standard AES3 sample rates and word sizes. AES11 Annex D (in the November 2005 printing of AES11-2003) shows an example method for providing isochronous timing relationships for distributed AES3 structures over asynchronous networks such as AES47, where reference signals may be locked to common timing sources such as GPS. AES53 specifies how timing markers within AES47 can be used to associate an absolute time stamp with individual audio samples, as described in AES47 Amendment 1.
    An additional standard has been published by the Audio Engineering Society to extend AES3 digital audio carried as AES47 streams to enable this to be transported over standard physical Ethernet hardware. This additional standard is known as AES51-2006.

    6. I²S (Inter-IC Sound) interface between integrated circuits in consumer electronics
    I2S, or Inter-IC Sound, or Integrated Interchip Sound, is an electrical serial bus interface standard used for connecting digital audio devices together. It is most commonly used to carry PCM information between the CD transport and the DAC in a CD player. The I2S bus separates clock and data signals, resulting in a very low jitter connection. Jitter can cause distortion in a digital-to-analog converter. The bus consists of at least three lines:

    a. Bit clock line
    b. Word clock line (also called word select line)
    c. At least one multiplexed data line

    You may also find the following lines:

    a. Master clock (typically 256 × the bit clock)
    b. A multiplexed data line for upload

    I²S consists, as stated above, of a bit clock, a word select and the data line. The bit clock pulses once for each discrete bit of data on the data lines, and it operates at a frequency which is a multiple of the sample rate. The multiplier depends on the number of bits per channel times the number of channels. So, for example, CD audio with a sample frequency of 44.1 kHz, framed as 32 bits of precision for each of 2 stereo channels, will have a bit clock frequency of 2.8224 MHz. The word select clock lets the device know whether channel 1 or channel 2 is currently being sent, since I²S allows two channels to be sent on the same data line. Transitions on the word select clock also serve as a start-of-word indicator. The word clock line pulses once per sample, so while the bit clock runs at some multiple of the sample frequency, the word clock always matches the sample frequency. For a 2-channel (stereo) system, the word clock is a square wave, with an equal number of bit clock pulses clocking the data to each channel. In a mono system, the word clock pulses one bit clock length to signal the start of the next word, but is no longer square; instead, all bit clock transitions occur with the word clock either high or low.
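    The clock arithmetic above is easy to verify directly (a sketch; the figures are those quoted in the text):

    ```python
    def bit_clock_hz(sample_rate_hz, bits_per_channel, channels=2):
        """I²S bit clock = sample rate x bits per channel x channel count."""
        return sample_rate_hz * bits_per_channel * channels

    # CD audio framed at 32 bits per channel, stereo:
    assert bit_clock_hz(44_100, 32) == 2_822_400   # 2.8224 MHz, as stated

    # The word clock, by contrast, always equals the sample rate:
    word_clock_hz = 44_100
    ```

    Note how the multiplier (here 64 = 32 bits × 2 channels) is fixed by the frame format, not by the audio precision actually carried in it.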

    Standard I²S data is sent MSB first, starting at the left edge of the word select clock, with a one bit clock delay. This allows both the transmitting and receiving devices to be indifferent to the audio precision of the remote device. If the transmitter is sending 32 bits per channel to a device with only 24 bits of internal precision, the receiver may simply ignore the extra bits of precision by not storing the bits past the 24th bit. Likewise, if the transmitter is sending 16 bits per channel to a receiving device with 24 bits of precision, the receiver will simply zero-fill the missing bits. This feature makes it possible to mix and match components of varying precision without reconfiguration.
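    That truncate-or-zero-fill behaviour can be sketched for a single MSB-first sample word (a toy model of what the receiver does, not real driver code):

    ```python
    def fit_precision(word, src_bits, dst_bits):
        """Adapt an MSB-first sample word of src_bits to a receiver with
        dst_bits of precision: drop LSBs if narrower, zero-fill if wider."""
        if dst_bits <= src_bits:
            return word >> (src_bits - dst_bits)      # ignore extra precision
        return word << (dst_bits - src_bits)          # zero-fill missing LSBs

    # 32-bit word into a 24-bit receiver: the low 8 bits are ignored.
    assert fit_precision(0x12345678, 32, 24) == 0x123456
    # 16-bit word into a 24-bit receiver: zero-filled up to 24 bits.
    assert fit_precision(0xABCD, 16, 24) == 0xABCD00
    ```

    Because the most significant bits always line up, no negotiation between the two ends is needed.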

    There are left justified I²S streams, where there is no bit clock delay and the data starts right on the edge of the word select clock, and there are also right justified I²S streams, where the data lines up with the right edge of the word select clock. These configurations however are not considered standard I²S.
    I²S signals can easily be transferred via Ethernet-spec connection hardware (8P8C plugs and jacks, and Cat-5e and above cabling).

    7. MADI (Multichannel Audio Digital Interface)
    Multichannel Audio Digital Interface, or MADI, is an industry-standard electronic communications protocol that defines the data format and electrical characteristics of an interface carrying multiple channels of digital audio. The AES standard for MADI is currently documented in AES10-2003. The MADI standard includes a bit-level description and has features in common with the two-channel format of AES3. Serial digital transmission over coaxial cable or fibre-optic lines of 28, 56, or 64 channels is supported, with sampling rates of up to 96 kHz and resolution of up to 24 bits per channel.
    MADI links use a transmission format similar to the FDDI networking technology (ISO 9314), which was popular in the mid-1990s for backbone links between LAN segments. Since MADI is most often transmitted on copper links via 75-ohm coaxial cables, it is more closely related to the FDDI specification for copper-based links, called CDDI.

    The basic data rate is 100 Mbit/s of data, 4B5B-encoded to produce a 125 Mbaud physical line rate. This clock is not synchronized to the audio sample rate, and the audio data payload is padded using "JK" sync symbols.

    The audio data is almost identical to the AES/EBU payload, although with more channels, which are numbered 0–55 or 0–63 rather than lettered. The only differences are that frame synchronization is provided by sync symbols outside the data itself, rather than by an embedded preamble sequence, and that the first four time slots of each subframe are encoded as normal data and used for channel identification:

    * Bit 0: Set to 1 to mark channel 0, the first channel in each frame.
    * Bit 1: Set to 1 to indicate that this channel is active (contains interesting data).
    * Bit 2: notA/B channel marker, used to mark left (0) and right (1) channels. Generally, even channels are A and odd channels are B.
    * Bit 3: Set to 1 to mark the beginning of a 192-sample data block.

    Sync symbols may be inserted at any subframe boundary, and must occur at least once per frame (0.45% minimum overhead).
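    The four mode bits listed above can be sketched as a toy helper (the function name is our own invention, not anything from AES10):

```python
# Sketch: build the four identification bits that lead each MADI
# subframe, following the bit meanings described in the text.

def madi_mode_bits(channel, active=True, block_start=False):
    bit0 = 1 if channel == 0 else 0   # marks channel 0, first in the frame
    bit1 = 1 if active else 0         # channel carries audio data
    bit2 = channel & 1                # A/B marker: even = A (0), odd = B (1)
    bit3 = 1 if block_start else 0    # start of a 192-sample data block
    return [bit0, bit1, bit2, bit3]

# Channel 0 at the start of a data block:
assert madi_mode_bits(0, active=True, block_start=True) == [1, 1, 0, 1]
# An odd-numbered (B/right) active channel mid-block:
assert madi_mode_bits(5) == [0, 1, 1, 0]
```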
    The original specification allowed 56 channels at sample rates from 28 to 54 kHz (32–48 kHz ±12.5%). This produced a total of 56×32×54 = 96768 kbit/s, leaving 3.232% of the channel for synchronization marks and transmit clock error.

    Finding that the wide tolerance on sample rates was little used, the 2003 revision specifies a sample rate range of 32–48 kHz but allows 64 channels, a total of 64×32×48 = 98304 kbit/s, leaving 1.696% for synchronization marks and transmit clock error. There is also a double-sample-rate mode, with half the number of channels and twice as many frames per second.
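    The capacity figures in the two paragraphs above can be checked with a few lines of arithmetic:

```python
# Verifying the payload arithmetic from the text against the
# 100 Mbit/s MADI line rate (4B5B-coded to a 125 MBd physical rate).

LINE_RATE = 100_000  # kbit/s

def payload_kbits(channels, bits_per_sample, max_fs_khz):
    return channels * bits_per_sample * max_fs_khz

old = payload_kbits(56, 32, 54)   # AES10-1991: 56 channels, up to 54 kHz
new = payload_kbits(64, 32, 48)   # AES10-2003: 64 channels, up to 48 kHz

assert old == 96768 and round(100 * (1 - old / LINE_RATE), 3) == 3.232
assert new == 98304 and round(100 * (1 - new / LINE_RATE), 3) == 1.696
```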

    The original specification (AES10-1991) defined the MADI link as a 56 channel transport for the purpose of linking large-format mixing consoles to digital multi-track recording devices. Large broadcast studios adopted it for use routing multi-channel audio throughout their facilities as well. The 2003 revision, called AES10-2003, adds a 64 channel capability as well as support for "double-rate" sampling at 96 kHz by removing vari-speed operation.
    The latest AES10-2008 standard includes minor clarifications and updates to correspond to the current AES3 standard.
    MADI is widely used in the audio industry, especially in the professional sector. Its advantages over other digital audio interface protocols and standards such as AES/EBU (AES3), ADAT, TDIF and S/PDIF are, first, support for a greater number of channels per line and, second, the use of coaxial and optical fibre media that enable the transmission of audio signals over 100 meters, and up to 3000 meters with optical fibre.

    8. MIDI (Musical Instrument Digital Interface)
    MIDI (Musical Instrument Digital Interface, pronounced /ˈmɪdi/) is an industry-standard protocol defined in 1982 that enables electronic musical instruments such as keyboard controllers, computers, and other electronic equipment to communicate, control, and synchronize with each other. MIDI allows computers, synthesizers, MIDI controllers, sound cards, samplers and drum machines to control one another, and to exchange system data (acting as a raw data encapsulation method for sysex commands). MIDI does not transmit an audio signal or media — it transmits "event messages" such as the pitch and intensity of musical notes to play, control signals for parameters such as volume, vibrato and panning, cues, and clock signals to set the tempo. As an electronic protocol, it is notable for its widespread adoption throughout the industry.
    (Figure: note names and MIDI note numbers.)

    All MIDI-compatible controllers, musical instruments, and MIDI-compatible software follow the same MIDI 1.0 specification, and thus interpret any given MIDI message the same way, and so can communicate with and understand each other. MIDI composition and arrangement takes advantage of MIDI 1.0 and General MIDI (GM) technology to allow musical data files to be shared among various electronic instruments by using a standard, portable set of commands and parameters. Because the music is simply data rather than recorded audio waveforms, the files are quite small by comparison.

    By the end of the 1970s, electronic musical devices were becoming increasingly common and affordable. However, devices from different manufacturers were generally not compatible with each other and could not be interconnected. Different interfacing models included analog control voltages at various standards (such as 1 volt per octave, or the logarithmic "hertz per volt"); analog clock, trigger and "gate" signals (both positive "V-trig" and negative "S-trig" varieties, between −15 V and +15 V); and proprietary digital interfaces such as Roland Corporation's DCB (Digital Control Bus), the Oberheim system, and Yamaha's "keycode" system. In 1981, audio engineer and synthesizer designer Dave Smith of Sequential Circuits, Inc. proposed a digital standard for musical instruments in a paper for the Audio Engineering Society. The MIDI 1.0 Specification was published in August 1983.

    Since then, MIDI technology has been standardized and is maintained by the MIDI Manufacturers Association (MMA). All official MIDI standards are jointly developed and published by the MMA in Los Angeles, California, USA (http://www.midi.org) and, for Japan, by the MIDI Committee of the Association of Musical Electronic Industry (AMEI) in Tokyo (http://www.amei.or.jp). The primary reference for MIDI is The Complete MIDI 1.0 Detailed Specification, document version 96.1, available only from the MMA in English or from AMEI in Japanese. Though the MMA site formerly offered free downloads of all MIDI specifications, links to the basic and general detailed specs have been removed, ostensibly in the hope that visitors will buy the printed documents. However, considerable ancillary material is available at no cost on the website.

    In the early 1980s, MIDI was a major factor in bringing an end to the "wall of synthesizers" phenomenon in progressive rock band concerts, when keyboard performers were often hidden behind huge banks of analog synthesizers and electric pianos. Following the advent of MIDI, many synthesizers were released in rack-mount versions, which meant that keyboardists could control many different instruments (e.g., synthesizers) from a single keyboard.

    In the 1980s, MIDI facilitated the development of hardware and computer-based sequencers, which can be used to record, edit and play back performances. In the years immediately after the 1983 ratification of the MIDI specification, MIDI interfaces were released for the Apple Macintosh, Commodore 64, Commodore Amiga and the PC-DOS platform, allowing for the development of a market for powerful, inexpensive, and now-widespread computer-based MIDI sequencers. The Atari ST came equipped with MIDI ports as standard, and was commonly used in recording studios for this reason. Synchronization of MIDI sequences is made possible by the use of MIDI timecode, an implementation of the SMPTE time code standard using MIDI messages, and MIDI timecode has become the standard for digital music synchronization.

    In 1991, the MIDI Show Control (MSC) protocol (in the Real Time System Exclusive subset) was ratified by the MIDI Manufacturers Association. The MSC protocol is an industry standard which allows all types of media control devices to talk with each other and with computers to perform show control functions in live and canned entertainment applications. Just like musical MIDI (above), MSC does not transmit the actual show media — it simply transmits digital data providing information such as the type, timing and numbering of technical cues called during a multimedia or live theatre performance.

    A number of music file formats have been based on the MIDI bytestream. These formats are very compact; a file as small as 10 KiB can produce a full minute of music or more due to the fact that the file stores instructions on how to recreate the sound based on synthesis with a MIDI synthesizer rather than an exact waveform to be reproduced. A MIDI synthesizer could be built into an operating system, sound card, embedded device (e.g. hardware-based synthesizer) or a software-based synthesizer. The file format stores information on what note to play and when, or other important information such as possible pitch-bend during the envelope of the note or the note's velocity.

    This is advantageous for applications such as mobile phone ringtones and some video games; however, it may be a disadvantage for other applications, because the data cannot guarantee that the intended listener hears an accurate waveform: each MIDI synthesizer has its own methods for producing sound from the MIDI instructions provided. For example, any MIDI file played back through the Microsoft MIDI Synthesizer (included in every Windows operating system) should sound the same or similar, but when the same MIDI bytestream is sent to a synthesizer on a generic sound card, or to a MIDI synthesizer on another operating system, the rendered sound may vary. One sound card's synthesizer might not reproduce the exact sounds of another.

    As such, MIDI-based mobile phone ring tones sound different on a handset than when previewed on a PC. In the same way, most modern software synthesizers can handle MIDI files but might render them completely differently from one another, especially since most modern software synthesizers, such as VST instruments, allow the loading and modification of different patches to create different sounds for each MIDI input. The term "MIDI sound" has acquired a poor reputation with some critics, which may be the result of the poor-quality sound synthesis provided by many early sound cards, which relied on FM synthesis instead of wavetables to produce audio.

    The physical MIDI interface uses 180° 5-pin DIN connectors. Opto-isolated connections are used to prevent ground loops among connected MIDI devices. Logically, MIDI is based on a ring network topology, with a transceiver inside each device. The transceivers physically and logically separate the input and output lines, meaning that MIDI messages received by a device in the network that are not intended for that device will be re-transmitted on the output line (MIDI-OUT). This introduces a delay, one long enough to become audible on larger MIDI rings.

    MIDI-THRU ports started to be added to MIDI-compatible equipment soon after the introduction of MIDI, in order to improve performance. The MIDI-THRU port avoids the aforementioned retransmission delay by linking the MIDI-THRU port to the MIDI-IN socket almost directly. The difference between the MIDI-OUT and MIDI-THRU ports is that data coming from the MIDI-OUT port has been generated on the device containing that port. Data that comes out of a device's MIDI-THRU port, however, is an exact duplicate of the data received at the MIDI-IN port.

    Such chaining of instruments via MIDI-THRU ports is unnecessary when using MIDI "patch bay," "mult," or "thru" modules, which consist of a MIDI-IN connector and multiple MIDI-OUT connectors to which multiple instruments are connected. Some equipment can merge MIDI messages into one stream, but this is a specialized function and is not universal to all equipment. MIDI thru boxes also clean up any skewing of MIDI data bits that might occur at the input stage. MIDI merger boxes merge all MIDI messages appearing at either of their two inputs into their output, which allows a musician to plug several MIDI controllers (e.g., two musical keyboards and a pedal keyboard) into a single synth voice device such as an E-mu Proteus.

    All MIDI-compatible instruments have a built-in MIDI interface. Some computer sound cards have a built-in MIDI interface, whereas others require an external one connected to the computer via a D-subminiature DA-15 game port, a USB connector, FireWire, or Ethernet. MIDI connectors are defined by the MIDI interface standard. In the 2000s, as computer equipment increasingly used USB connectors, companies began making USB-to-MIDI interfaces that can transfer MIDI channels to USB-equipped computers. As well, with the increasing use of computers for music-making and composition, some MIDI keyboard controllers were equipped with USB jacks, so that they can be plugged into computers running "software synths" or other music software.

    In popular parlance, piano-style musical keyboards are called "keyboards", regardless of their functions or type. Amongst MIDI enthusiasts, however, keyboards and other devices used to trigger musical sounds are called "controllers", because with most MIDI set-ups, the keyboard or other device does not make any sounds by itself. MIDI controllers need to be connected to a voice bank or sound module in order to produce musical tones or sounds; the keyboard or other device is "controlling" the voice bank or sound module by acting as a trigger. The most common MIDI controller is the piano-style keyboard, either with weighted or semi-weighted keys, or with unweighted synth-style keys. Keyboard-style MIDI controllers are sold with as few as 25 keys (2 octaves), with larger models such as 49 keys, 61 keys, or even the full 88 keys being available.

    MIDI controllers are also available in a range of other forms, such as electronic drum triggers; pedal keyboards that are played with the feet (e.g., with an organ); EWI wind controllers for performing saxophone-style music; and MIDI guitar synthesizer controllers. EWI, which stands for Electronic Wind Instrument, is designed for performers who want to play saxophone, clarinet, oboe, bassoon, and other wind instrument sounds with a synthesizer module. When wind instruments are played using a MIDI keyboard, it is hard to reproduce the expressive control found on wind instruments that can be generated with the wind pressure and embouchure. The EWI has an air-pressure level sensor and bite sensor in the mouthpiece, 13 touch sensors arrayed along the side of the controller, in a similar location to where sax keys are placed, and touch sensors for octaves and bends.

    Pad controllers are used by musicians and DJs who make music from sampled sounds or short samples of music. Pad controllers often have banks of assignable pads and assignable faders and knobs for transmitting MIDI data or changes; the better-quality models are velocity-sensitive. More rarely, some performers use more specialized MIDI controllers, such as triggers affixed to their clothing or stage items (e.g., magicians Penn and Teller's stage show). A MIDI foot controller is a pedalboard-style device with rows of footswitches that select banks of presets, send MIDI program change commands, and send MIDI note numbers (some also perform MIDI merges). Another specialized type of controller is the drawbar controller; it is designed for Hammond organ players who have MIDI-equipped organ voice modules. The drawbar controller provides the keyboard player with many of the controls found on a vintage 1940s or 1950s Hammond organ, including harmonic drawbars, a rotating speaker speed control switch, vibrato and chorus knobs, and percussion and overdrive controls. As with all controllers, the drawbar controller does not produce any sounds by itself; it only controls a voice module or software sound device.

    While most controllers do not produce sounds, there are some exceptions. Some controller keyboards called "performance controllers" have MIDI-assignable keys, sliders, and knobs, which allow the controller to be used with a range of software synthesizers or voice modules; yet at the same time, the controller also has an internal voice module which supplies keyboard instrument sounds (piano, electric piano, clavichord), sampled or synthesized voices (strings, woodwinds), and digital signal processing effects (distortion, compression, flanging, etc.). These controller keyboards are designed to allow the performer to choose between the internal voices and external modules.

    Because all MIDI-compatible devices follow the same MIDI 1.0 specification, a message means the same thing to every device: if a note is played on a MIDI controller, it will sound at the right pitch on any MIDI instrument whose MIDI In connector is connected to the controller's MIDI Out connector.

    When a musical performance is played on a MIDI instrument (or controller) it transmits MIDI channel messages from its MIDI Out connector. A typical MIDI channel message sequence corresponding to a key being struck and released on a keyboard is:

    1. The user presses the middle C key with a specific velocity (usually translated into the volume of the note, but also usable by the synthesizer to set characteristics of the timbre). The instrument sends one Note-On message.
    2. The user changes the pressure applied to the key while holding it down, a technique called Aftertouch (optional, may be repeated). The instrument sends one or more Aftertouch messages.
    3. The user releases the middle C key, again with the possibility that the release velocity controls some parameters. The instrument sends one Note-Off message.

    Note-On, Aftertouch, and Note-Off are all channel messages. For the Note-On and Note-Off messages, the MIDI specification defines a number (from 0–127) for every possible note pitch (C, C♯, D etc.), and this number is included in the message.
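    As a sketch, the Note-On and Note-Off messages from the sequence above can be built as raw MIDI 1.0 bytes; the status-byte layout (message type in the high nibble, channel number in the low nibble) and middle C = note number 60 are from the MIDI specification, while the helper names are our own:

```python
# Sketch: MIDI 1.0 channel messages as raw bytes. A channel message is a
# status byte (type in the high nibble, channel 0-15 in the low nibble)
# followed by two 7-bit data bytes.

def note_on(channel, note, velocity):
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note, velocity=0):
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

MIDDLE_C = 60  # MIDI note number for middle C

# Press and release middle C on channel 0 with velocity 100:
assert note_on(0, MIDDLE_C, 100) == b'\x90\x3c\x64'
assert note_off(0, MIDDLE_C) == b'\x80\x3c\x00'
```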

    Other performance parameters can be transmitted with channel messages, too. For example, if the user turns the pitch wheel on the instrument, that gesture is transmitted over MIDI using a series of Pitch Bend messages (also a channel message). The musical instrument generates the messages autonomously; all the musician has to do is play the notes (or make some other gesture that produces MIDI messages). This consistent, automated abstraction of the musical gesture could be considered the core of the MIDI standard.
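    A Pitch Bend message can be sketched the same way (per MIDI 1.0: status byte 0xEn, then a 14-bit value with centre 8192, split into two 7-bit data bytes, least-significant first; the helper name is ours):

```python
# Sketch: a MIDI 1.0 Pitch Bend channel message. The 14-bit bend value
# (0..16383, 8192 = wheel at rest) is sent LSB-first in two 7-bit bytes.

def pitch_bend(channel, value):
    value &= 0x3FFF                                   # clamp to 14 bits
    return bytes([0xE0 | (channel & 0x0F), value & 0x7F, value >> 7])

assert pitch_bend(0, 8192) == b'\xe0\x00\x40'         # centred wheel
```

    Turning the wheel produces a stream of such messages, one per sampled wheel position.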

    Several computer programs allow manipulation of the musical data, so that composing for an entire orchestra of synthesized instrument sounds is possible. The data can be saved as a Standard MIDI File (SMF), digitally distributed, and then reproduced by any computer or electronic instrument that also adheres to the MIDI, GM, and SMF standards. There are many websites offering downloads of popular songs as well as classical music in SMF and GM form, and websites where MIDI composers can share their works in that same format.

    As a music distribution format, the Standard MIDI File was arguably far more attractive to computer users before broadband internet became available to the masses, owing to its small file size. The advent of high-quality audio compression such as the MP3 format has also reduced the relative size advantage of MIDI music to some degree, though an MP3 is still much larger than an SMF.

    9. S/PDIF (Sony/Philips Digital Interconnect Format)
    S/PDIF specifies a data link layer protocol and a choice of physical layer specifications for carrying digital audio signals between devices and stereo components. The name stands for Sony/Philips Digital Interconnect Format (also expanded as Sony Philips Digital InterFace), the two companies being the primary designers of the format. It is standardized in IEC 60958 (often referred to as AES/EBU), where it is known as IEC 60958 type II. S/PDIF is essentially a minor modification of the original AES/EBU standard for consumer use, with small differences in the protocol and less expensive hardware requirements.

    A common use for the S/PDIF interface is to carry compressed digital audio as defined by the standard IEC 61937. This mode is used to connect the output of a DVD player to a home theater receiver that supports Dolby Digital or DTS surround sound. Another common use is to carry uncompressed digital audio from a CD player to a receiver. This specification also allows for the coupling of personal computer digital sound (if equipped) via optical or coax to Dolby or DTS capable receivers.
    S/PDIF was developed from a standard used in the professional audio field, known as AES/EBU which is commonly used to interconnect professional audio equipment. S/PDIF remained almost identical at the protocol level (consumer S/PDIF provides for copy-protection, whereas professional interfaces do not), but changed the physical connectors from XLR to either electrical coaxial cable (with RCA jacks) or optical fibre (TOSLINK, i.e., F05 or EIAJ Optical), both of which cost less. The cable was also changed from 110 Ω balanced twisted pair to the already far more common (and therefore compatible and inexpensive) 75 Ω coaxial cable, using RCA jacks instead of the BNC connector which is common in commercial applications. S/PDIF is, for all intents and purposes, a consumer version of the AES/EBU format.

    Note that there are no differences in the signals transmitted over optical and coaxial S/PDIF connectors: both carry exactly the same information. Selection of one over the other rests mainly on the availability of appropriate connectors on the chosen equipment and the preference and convenience of the user. Connections longer than 6 meters or so, or those requiring tight bends, should use coaxial cable, since the high signal attenuation of TOSLINK cables limits their effective range. On the other hand, TOSLINK cables are not susceptible to ground loops and RF interference as coaxial cables are. One deciding factor for many is cost: any standard 75 Ω A/V cable can be used for coaxial connectivity, while TOSLINK requires a specific cable which until recently was not very affordable.

    10. TASCAM Digital Interface (TDIF)
    The TASCAM Digital Interface (TDIF) is a proprietary format defined by TASCAM that uses an unbalanced 25-pin D-sub cable to transmit and/or receive up to eight channels of digital audio between compatible devices. Unlike the ADAT Lightpipe connection, TDIF is bidirectional, meaning that only one cable is required to connect the eight ins and outs of one device to another.

    The initial specification available to implementers was called TDIF-1 Version 1.0.
    The first product with this connector was the TASCAM DA-88. That implementation did not include the ability to derive word clock synchronization over the TDIF-1 connection, so a BNC WORD CLOCK connection was required as well (see the DA-88 user's manual). Later TASCAM products added the ability to sync over TDIF-1, although the DA-88 remained excluded (see the DA-38 user's manual). Other manufacturers vary in the completeness of their implementations.

    The signal labelled "word clock" in the TDIF-1 spec is delayed 270 degrees (90 degrees advanced) with respect to the word clock visible from the BNC word clock output. This is because the TDIF-1 spec was derived from the digital audio transmitter of the NEC uPD6381 DSP used in the DA-88.
    The TDIF-1 Version 1.1 specification adds parity and other channel-information bits. TDIF-1 Version 2.0 adds double-speed and quad-speed rates (e.g. 96 kHz and 192 kHz) at reduced channel counts.
    The TASCAM X-48 supports 96 kHz at full channel count over six TDIF-1 connectors, using a post-Version 2.0 specification.

    11. Bluetooth
    Bluetooth is an open wireless protocol for exchanging data over short distances from fixed and mobile devices, creating personal area networks (PANs). It was originally conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of synchronization.
    The word Bluetooth is an anglicized version of Old Norse Blátönn or Danish Blåtand, the name of the tenth-century king Harald I of Denmark, who united dissonant Danish tribes into a single kingdom. The implication is that Bluetooth does the same with communications protocols, uniting them into one universal standard.
    The Bluetooth logo is a bind rune merging the Germanic runes Hagall and Berkanan.
    Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 frequencies. In its basic mode, the modulation is Gaussian frequency-shift keying (GFSK), achieving a gross data rate of 1 Mbit/s. Bluetooth provides a way to connect and exchange information between devices such as mobile phones, telephones, laptops, personal computers, printers, Global Positioning System (GPS) receivers, digital cameras, and video game consoles through a secure, globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band.

    The Bluetooth specifications are developed and licensed by the Bluetooth Special Interest Group (SIG), which consists of companies in the areas of telecommunication, computing, networking, and consumer electronics. Bluetooth is a standard and communications protocol primarily designed for low power consumption, with a short range (power-class-dependent: 1 meter, 10 meters, or 100 meters) based on low-cost transceiver microchips in each device. Bluetooth makes it possible for these devices to communicate with each other when they are in range; because they use a radio (broadcast) communications system, they do not have to be in line of sight of each other.
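    The hopping idea can be illustrated with a toy model. The real hop-selection kernel is defined in the Bluetooth Baseband specification; this sketch only shows the notion of picking one of the 79 ISM-band channels (centre frequencies 2402 + k MHz, k = 0..78) per time slot, and the hashing scheme here is purely illustrative:

```python
import hashlib

# Toy frequency-hopping sketch: derive a pseudo-random channel index
# from a device address and clock value. NOT the real Bluetooth kernel.

def toy_hop_channel(device_addr: int, clock: int) -> int:
    digest = hashlib.sha256(f"{device_addr}:{clock}".encode()).digest()
    return digest[0] % 79                # channel index 0..78

def channel_mhz(k: int) -> int:
    return 2402 + k                      # RF centre frequency in MHz

seq = [toy_hop_channel(0x123456, t) for t in range(10)]
assert all(0 <= k <= 78 for k in seq)
assert all(2402 <= channel_mhz(k) <= 2480 for k in seq)
```

    Because both ends derive the same sequence from shared state, they stay on the same channel in each slot while interferers on any single frequency are only hit occasionally.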


  • History of Digital Audio
    Commercial digital recording of classical and jazz music began in the early 1970s, pioneered by Japanese companies such as Denon, by the BBC, and by the British record label Decca (which in the mid-1970s developed digital audio recorders of its own design for mastering its albums), although experimental recordings exist from the 1960s. The first 16-bit PCM recording in the United States was made by Thomas Stockham at the Santa Fe Opera in 1976, on a Soundstream recorder. In most cases there was no mixing stage involved; a stereo digital recording was made and used unaltered as the master tape for subsequent commercial release. These unmixed digital recordings are still described as DDD, since the technology involved is purely digital. (Unmixed analogue recordings are likewise usually described as ADD, to denote a single generation of analogue recording.)

    Although the first digital recording of a non-classical piece, Morrissey-Mullen's cover of the Rose Royce hit "Love Don't Live Here Anymore" (released in 1979 as a vinyl EP), was recorded in 1978 at EMI's Abbey Road studios, the first entirely digitally recorded (DDD) popular music album was Ry Cooder's Bop Till You Drop, recorded in late 1978. It was unmixed, being recorded straight to a two-track 3M digital recorder in the studio. Many other top recording artists were early adopters of digital recording. Others, such as former Beatles producer George Martin, felt that the multitrack digital recording technology of the early 1980s had not reached the sophistication of analogue systems. Martin did, however, use digital mixing to reduce the distortion and noise that an analogue master tape would introduce (thus ADD). An early example of an analogue recording that was digitally mixed is Fleetwood Mac's 1979 release Tusk.


  • What is Digital Audio
    Digital audio uses digital signals for sound reproduction. This includes analog-to-digital conversion, digital-to-analog conversion, storage, and transmission. In effect, the system commonly referred to as digital is in fact a discrete-time, discrete-level analog of a previous electrical analog. While modern systems can be quite subtle in their methods, the primary usefulness of a digital system is that, due to its discrete nature (in both time and amplitude), signals can be corrected, once digital, without loss, and the digital signal can be reconstituted. This discreteness in both time and amplitude is the key to reconstitution, which is unavailable for a signal in which at least one of time or amplitude is continuous. Hybrid systems (part discrete, part continuous) exist, but are no longer used in new designs.

    Digital audio has emerged because of its usefulness in the recording, manipulation, mass-production, and distribution of sound. Modern distribution of music across the internet through on-line stores depends on digital recording and digital compression algorithms. Distribution of audio as data files rather than as physical objects has significantly reduced costs of distribution.

    From the wax cylinder to the compact cassette, analogue audio storage and reproduction have been based on the same principles upon which human hearing is based. In an analogue audio system, sounds begin as physical waveforms in the air, are transformed into an electrical representation of the waveform via a transducer (for example, a microphone), and are stored or transmitted. To be re-created as sound, the process is reversed, through amplification and then conversion back into physical waveforms via a loudspeaker. Although its nature may change, the signal's fundamental wave-like character remains unchanged during storage, transformation, duplication, and amplification. All analogue audio signals are susceptible to noise and distortion, due to the inherent noise present in electronic circuits, and analogue systems degrade at each step, with each copy, and in some media with time, temperature, and magnetic or chemical effects. In a digital signal, by contrast, all distortion and noise are added at capture or processing; no more is added in repeated copies, unless the entire signal is lost.

    The digital audio chain begins when an analogue audio signal is sampled and then (for PCM, the usual form of digital audio) converted into binary signals, 'on/off' pulses, which are stored as binary electronic, magnetic, or optical signals rather than as continuous-time, continuous-level electronic or electromechanical signals. This signal may then be further encoded to combat any errors that might occur in storage or transmission; this encoding, however, serves error correction and is not strictly part of the digital audio process. Such "channel coding" is essential to the ability of a broadcast or recorded digital system to avoid loss of bit accuracy. The discrete time and level of the binary signal allow a decoder to recreate the analogue signal on replay. An example of a channel code is the Eight-to-Fourteen Modulation (EFM) used on the audio Compact Disc.
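    A minimal sketch of the sampling-and-quantization step described above, assuming CD-style parameters (44.1 kHz, 16 bits; the function and constant names are ours):

```python
import math

# Sketch of the start of the digital audio chain: sample a 1 kHz tone
# at 44.1 kHz and quantize each sample to a 16-bit signed integer, the
# "binary signal" that is then stored or channel-coded.

FS = 44_100          # sample rate, Hz
BITS = 16            # bits per sample

def pcm_samples(freq_hz, n):
    full_scale = 2 ** (BITS - 1) - 1     # 32767 for 16-bit PCM
    return [round(full_scale * math.sin(2 * math.pi * freq_hz * i / FS))
            for i in range(n)]

samples = pcm_samples(1000, 64)
assert all(-32768 <= s <= 32767 for s in samples)   # fits in 16 bits
assert samples[0] == 0                   # sine starts at a zero crossing
```

    Everything downstream (storage, channel coding such as EFM, and eventual digital-to-analog conversion) operates on integer sample streams like this one.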


