

Why the Current Standards for Hi-Fidelity Reproduction are Inadequate and How Dynavector's SuperStereo System Addresses the Problem.


INTRODUCTION AND BACKGROUND


For more than 100 years, since the time of Thomas Edison, the perfect reproduction of music has been one of the most sought-after goals of music lovers. Great efforts are still being made to achieve this aim, resulting in an endless stream of new products built on modern digital technology. But are these efforts on the right lines?

The long accepted criterion for hi-fi sound reproduction is the realisation of an ideal transfer function of one. (The transfer function is the basic mathematical expression describing the dynamic relationship between the input and output of a dynamic system such as an amplifier or loudspeaker.) A transfer function of one means a perfect response to the input signal in that dynamic system. To express it another way, a perfect transfer function means that the frequency response of the system is entirely flat from 0 Hz to infinity, with no phase shift anywhere in the frequency range.
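
As a brief restatement in the standard notation that this paragraph relies on (our addition, not part of the original text), the criterion can be written:

    % Transfer function of a dynamic system with input X(s) and output Y(s):
    H(s) = \frac{Y(s)}{X(s)}

    % "Transfer function one" means, for every angular frequency \omega:
    |H(j\omega)| = 1 \quad \text{(entirely flat amplitude response)}, \qquad
    \arg H(j\omega) = 0 \quad \text{(no phase shift)}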

In a system having a transfer function of one, all information in the input signal is transferred to the output signal without error. (This criterion is also an ideal condition for telecommunications systems, whether analogue or digital.) For the last 100 years, all those engaged in audio development have been guided by the transfer-function-one concept. Thanks to the remarkable advances of digital technology, the performance of high-end equipment is today considered to be nearly perfect. We can conclude that the objective of meeting transfer function one has already been achieved as far as the technology is concerned.

Success in meeting transfer function one means that any sound or any music recorded on modern media through microphones can be played back perfectly. But, however highly digital sound technology is acclaimed, the gap between live music and its playback is still very large, except in rare instances. This is the fundamental reason for the present poor state of the audio business. Music lovers and musicians have lost enthusiasm for, or faith in, the music played back through existing stereo systems. And, however much they upgrade, they still cannot get a life-like sound from their hi-fi systems. This unfortunate situation is mainly caused by many years of devotion to the transfer-function-one criterion, which was believed to be the final goal of hi-fi technology.

Is This Criterion Valid or Invalid?

Almost all criteria for stereo reproduction are based on the assumption that sound propagates at a constant velocity of 340 m/sec. This is accepted as an absolute truth, and room acoustics are explained as the result of three-dimensional wall reflections of sound having a constant wave velocity. In other words, the acoustic character of an individual room is considered to be the result of a very complex mixture of reflections. Here too, however complex the echoes are, the sound velocity has been treated as constant by the audio engineering world up to the present time. Multi-channel surround stereo is the typical example designed according to this concept. We call it geometrical acoustics.

From the time audio engineering began until today, the design principle for audio equipment, as well as for room acoustics, has been based on the common assumption that sound velocity is always constant. These concepts rest on the theories of Fourier and Laplace. On this basis, all the dynamic characteristics of an audio system can be analysed and designed by observing the system's response to a sinusoidal input whose frequency is swept over the required range. In other words, the most common characteristics of audio equipment, such as frequency response, phase response and impulse response, can be explained theoretically. As far as dynamic systems, including audio systems, are concerned, the time response and frequency response of the system carry the same information. That is, if the time response of a system is observed, its frequency response can be calculated. More simply, the time response and the frequency response are two ways of describing the same performance. A system following this particular rule is called a "minimum phase system". This is the most important and standard theorem common to all vibratory phenomena, including audio systems and electrical signal transmission.
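
A minimal numerical sketch of this time/frequency equivalence is given below. It is our illustration, not part of the original paper; it assumes Python with NumPy, and the first-order system is an arbitrary example chosen only to show the calculation.

    import numpy as np

    # For a linear system, the time response (impulse response) and the
    # frequency response carry the same information. A simple first-order
    # low-pass system is used here purely as an illustration.

    fs = 48000                      # sampling rate, Hz (arbitrary choice)
    n = np.arange(4096)
    tau = 0.001                     # 1 ms time constant (arbitrary)
    h = np.exp(-n / (fs * tau))     # impulse response (time response)
    h /= h.sum()                    # normalise to unity gain at DC

    # The frequency response is obtained directly from the time response.
    H = np.fft.rfft(h)
    freqs = np.fft.rfftfreq(len(h), d=1.0 / fs)

    magnitude_db = 20 * np.log10(np.abs(H))
    phase_deg = np.degrees(np.unwrap(np.angle(H)))
    # magnitude_db and phase_deg describe exactly the same system behaviour
    # that the impulse response h describes in the time domain.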

As described before, a system having a transfer function of one has a flat frequency and phase response. This knowledge, based on Fourier's theory, produced the common agreement that transfer function one is the absolute criterion for high fidelity stereo sound reproduction, and it has remained unchallenged until today. It is accepted as the ideal yardstick for designing and evaluating loudspeakers, audio amplifiers, recorders, pick-up and playback transducers, microphones, phono cartridges and transmission lines. Even the acoustic design of a listening room, which is not a minimum phase system, is strongly influenced by this yardstick, which is used by many architects.

Thanks to the great advancement of electronics, computer technology and materials, almost all audio equipment is acclaimed for meeting this criterion with virtually perfect frequency and phase response. It is no longer difficult to obtain a flat response in a listening room by selecting good quality audio equipment. But are typical buyers of such systems enjoying life-like music playback in their listening rooms? At the moment, the situation regarding the replay of music is not as the hi-fi criterion suggests. Many customers are being deceived by this criterion even though they may spend a great deal of money, and this is causing many of them to lose interest in listening to stereo.

Why Is This?

Dynavector has been studying this contradiction for 10 years. Finally, and fortunately, we have come to the conclusion that this accepted hi-fi criterion can be shown to be valid only under special conditions, such as in entirely open space or in an anechoic room. But the recording and playback of music must, of course, occur where the sound is confined within a limited space such as a concert hall, studio or domestic listening room.

In these familiar conditions, more rigorous and precise investigation of the nature of sound itself is essential. From the study of advanced physics, we conclude that the behaviour of beats needs closer examination for music played in an enclosed environment such as a hall.

Physics textbooks say that, when considering the propagation of music signal waves in a medium, it is necessary to understand that the propagation velocity of a wave packet is not always constant; it depends on the characteristics of the medium. By wave packet we mean the composite of waves of several frequencies which exist in the music signal. We normally accept as given that sound velocity is always 340 m/sec, and from the time of Thomas Edison until now, all applied acoustic theory for designing audio equipment has been based on this rule.

But this rule needs to be reconsidered from a more rigorous scientific standpoint. When sound velocity is calculated from a wave equation, or measured experimentally, a sound of a single frequency is usually assumed. In real music, however, many frequency components exist at the same time and propagate together in the form of a packet. For actual music, we need to investigate the basic character of wave packet propagation, not the propagation of a single frequency.

Where actual music is concerned, a wave packet consists of an almost infinite number of frequencies. Many fundamental frequency components in music are accompanied by attendant components whose frequencies differ only slightly from the fundamental. This means that beats are generated; the frequency of a beat is the difference in frequency between two sound components. Usually, the propagation velocity of the waveform of a beat is considered to be identical to the sound velocity of 340 m/sec.
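
The short sketch below (our addition, assuming Python with NumPy; the 440 Hz and 443 Hz tones are arbitrary illustrative choices) shows a beat whose frequency is the difference between two nearby components:

    import numpy as np

    # Two tones whose frequencies differ slightly produce a beat whose
    # frequency is the difference of the two components, as stated above.

    fs = 48000
    t = np.arange(0, 1.0, 1.0 / fs)
    f1, f2 = 440.0, 443.0                       # two nearby components
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

    # Trigonometric identity: the sum equals a carrier at the mean frequency
    # multiplied by an envelope at half the difference frequency, so the
    # audible beat (the envelope repetition) occurs at f2 - f1 = 3 Hz.
    carrier = 2 * np.cos(np.pi * (f2 - f1) * t) * np.sin(np.pi * (f1 + f2) * t)
    assert np.allclose(x, carrier, atol=1e-9)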

Advanced texts on the subject describe the character of beats mathematically. A beat does not always propagate at the sound velocity: the wave packet changes its make-up as time elapses whenever the environment is a closed space. The propagation of complex waveforms such as music therefore needs careful classification. We need to distinguish whether the sound is in a room or in open space; in each case the propagation velocity depends on the shape of the beats.

The reference texts on the subject say that sound propagation in non-dispersive media has a constant velocity throughout the entire frequency range. By contrast, the propagation velocity in dispersive media needs more careful treatment, from a theoretical point of view which is difficult to grasp from a knowledge of traditional audio engineering alone. It can be shown mathematically that in a dispersive environment the waveform of a beat (wave packet) transforms continuously and travels at a velocity different from the normal 340 m/sec. In a typical piece of music, each principal frequency of a musical instrument has many side frequency components whose magnitudes are small and whose frequencies deviate only slightly from the principal. This means that, in the vicinity of each principal frequency, many beats are always present as shadows. In other words, the complex sound waves of real music are followed by shadows caused by the side frequencies.

If a music wave travels in a non-dispersive medium such as open air or an anechoic room, the principal waves and their shadows arrive at the same time. But in more usual situations, such as a concert hall, the environment is shown by actual measurement or theoretical calculation to be a dispersive medium. Accordingly, the beats (shadows) arrive a moment after the principal components. This time misalignment between the principal sound component and its shadows makes the sound of live music in a concert hall more colourful and enjoyable.

To summarise the above using more scientific terminology, we can say:

The waveform of a beat is similar to a sinusoidal wave modulated by a lower frequency signal. The propagation velocity of the modulated signal (the carrier) is the phase velocity; the propagation velocity of the modulating signal (the shape of the beat envelope) is the group velocity. When such a signal travels in a non-dispersive medium, the modulating signal travels at the same velocity as the modulated signal. That is to say, the information in the modulating signal arrives at the same time as the carrier. In this case we say there is no group delay, or that the group velocity and the phase velocity are the same.

When a sound wave travels in a dispersive medium, the group velocity (envelope) and the phase velocity (carrier) are not identical. This phenomenon is basically the same as that found in a microwave waveguide in radar.
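
For reference, the standard textbook definitions behind this summary can be written as follows (a restatement, not part of the original text):

    % Phase velocity and group velocity of a wave with angular frequency \omega
    % and wavenumber k (standard definitions):
    v_{\mathrm{phase}} = \frac{\omega}{k}, \qquad v_{\mathrm{group}} = \frac{d\omega}{dk}

    % Two nearby components combine into a carrier multiplied by a slow envelope (a beat):
    \cos(\omega_1 t - k_1 x) + \cos(\omega_2 t - k_2 x)
      = 2\cos\!\left(\tfrac{\Delta\omega}{2}\,t - \tfrac{\Delta k}{2}\,x\right)
        \cos\!\left(\bar{\omega}\,t - \bar{k}\,x\right)

    % The envelope travels at \Delta\omega/\Delta k \to d\omega/dk (the group velocity);
    % in a non-dispersive medium \omega = c\,k, so group and phase velocity both equal c.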

In the case of audio frequencies the situation is very similar. By measuring or calculating the frequency response of a room, we find that large numbers of group delays are distributed throughout the frequency range. In a frequency range where a group delay occurs, the space behaves dispersively. From this we can see that the acoustic character of a room is partly dispersive and partly non-dispersive: as far as room acoustics are concerned, dispersive and non-dispersive behaviour alternate across the frequency response of the room. The evidence for this dispersion is found by measurement using an FFT (Fast Fourier Transform) analyser. The density and distribution of group delays depend on the room conditions and the listening position. Our research shows that the more complex the group delays, the more enjoyable the conditions for listening to music.
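
A sketch of how such group delays can be computed from a room response is given below. It is our illustration, not Dynavector's measurement procedure: it assumes Python with NumPy, and the toy impulse response with two reflections merely stands in for an actual room measurement.

    import numpy as np

    # In practice the impulse response h would come from a real room
    # measurement; here a toy response with two hypothetical reflections
    # stands in for it, purely to show the calculation.

    fs = 48000
    h = np.zeros(8192)
    h[0] = 1.0                       # direct sound
    h[int(0.004 * fs)] = 0.5         # hypothetical early reflection at 4 ms
    h[int(0.011 * fs)] = 0.3         # hypothetical reflection at 11 ms

    H = np.fft.rfft(h)
    freqs = np.fft.rfftfreq(len(h), d=1.0 / fs)     # Hz
    phase = np.unwrap(np.angle(H))                  # radians
    omega = 2 * np.pi * freqs                       # rad/s

    # Group delay is the negative derivative of phase with respect to
    # angular frequency; its variation across frequency indicates where
    # the response behaves dispersively.
    group_delay_s = -np.gradient(phase, omega)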

We can conclude the above theoretical discussion by saying that, in an actual concert, the principal information in music is followed by dense and complex shadows. These are caused by the dense and complex distribution of dispersive regions in the frequency response of the hall acoustics. And the shadows exhibit delay in following the principal information of the original music in the enclosed space of the concert hall or recording studio. The delayed shadows, occurring throughout the entire frequency range, produce sound that is more enjoyable than when music is heard out of doors in an open space, or under anechoic conditions. The presence of beats in an enclosed environment ensures that a more natural, and hence more attractive, sound is heard.

Today, many new types of musical instrument are used. But the big difference that separates them from natural instruments is readily apparent from the sounds they produce, which are uniform and lacking in subtlety. This is because the sound of an electronic synthesiser, for example, consists of a principal tone generated by a sinusoidal function and, unfortunately, the density of side frequencies is very low or non-existent compared with natural instruments. This means that the beats of electronic instruments are simple and few, and the music played on them is overwhelmed by the principal sound even in a very dispersive environment. That is to say, the music has great power but largely lacks the attractive natural tonal colours which are so much a part of classical music.

What is Hi-Fi?

Usually, audiophiles believe that high fidelity music reproduction can be achieved if all the components of a hi-fi system have technically perfect data for frequency and phase response and little or no distortion. In other words, hi-fi reproduction is near perfect if the musical signals taken by the best microphones are replayed through a system having such measured characteristics. But is this correct? Not in Dynavector's view for reasons which we shall now explain.

Following the analysis given above of the nature of sound in a concert hall or other enclosed space, we now wish to explain a totally new concept in audio history which we think marks a breakthrough in the understanding and achievement of the life-like reproduction of recorded music. We go so far as to say that, without an understanding of this concept, real advances in the playback of music will not be possible and digital high-technology will simply accelerate the confusion in the audio world. That this is so is very evident from multi-channel surround systems existing today which completely ignore this concept.

This concept, then, is the missing link between actual music and its replay by hi-fi stereo. For, however advanced the audio technology, the problem has stayed unsolved by the audio industry and remains the industry's most significant hurdle. The reasons for this difficulty can be understood by a careful reading of this paper.

First, let us ask: what is the sound developed by a loudspeaker? Can a loudspeaker recognise and develop separately the phase velocity and the group velocity from its input signal?

As already mentioned, a concert hall is full of sounds characterised by phase and group velocity, which are not the same. The principal information in music is always shadowed, more or less, by the beat having group velocity. The character of the sound of music played in a concert hall is very much dependent on the state of the shadows at the listening point. Therefore, to enjoy the playback of music in a listening room from CD's or LP's as if one were at the original concert, the group and phase velocity should be reproduced separately.

To put it more technically, the dispersive condition of the original hall must be recovered in the reproduced sound when playing back recorded music. It is thus clear that an essential requirement of true high fidelity is an accurate simulation of the original dispersive condition in the listening room in addition to meeting the current criterion of audio technology.

How to Obtain a Dispersive Environment in the Listening Room

A microphone collects the sound of music, consisting of group and phase velocity components, as a single electrical signal. On replay the signal is amplified by an amplifier. A loudspeaker is then driven by this amplified electrical signal.

The diaphragm of the loudspeaker vibrates according to the signal. Finally, sound similar to the electric signal is radiated into the listening room. At this moment, has the sound from the speaker the same character as the original sound?

The sound from the loudspeaker radiates at a constant velocity of 340 m/sec. This means that the direct sound of the loudspeaker is entirely non-dispersive; in other words, the group-velocity character of the original sound has already disappeared from the speaker's output at this stage. Listeners in a listening room hear a sound whose waveform is similar to that captured by the microphone, but the group velocity information cannot exist in the listening room: the basic character of the sound there is almost non-dispersive. This is the standard condition for listening to stereo today, and we hope that you will now appreciate why there is a missing link between the original sound and its playback.

Dynavector's Conclusion

For true hi-fi music reproduction, traditional hi-fi technology, which pursues only accurate signal capture and replay by means of advanced digital techniques, falls far short. The recovery, in the listening room, of the dispersive elements contained in the original sound is now recognised as the important factor. Conventional two-speaker stereo does not allow for this, and neither do the multi-channel systems on offer.

Dynavector has concluded that existing two-speaker stereo needs at least one more pair of speakers to develop the required dispersive conditions in the listening room. After ten years of research, we have succeeded in producing the answer to this problem by developing the SuperStereo system.

SuperStereo System

As previously mentioned, a loudspeaker radiates sound as non-dispersive propagation waves. It is not possible for the loudspeaker itself to discriminate between non-dispersive and dispersive sound. For the non-dispersive and dispersive components to be distinguished in the sound radiated directly from the diaphragm of a speaker, the environment surrounding the speaker would need to be identical to the original environment. Perfect simulation of the original recording environment is theoretically not possible. But in our many trials over several years, we have found that even an approximate simulation can produce a life-like reality from the playback of recordings, much closer to the original sound.

After much research, Dynavector has now perfected a SuperStereo processor which produces numerous group delays throughout the entire frequency range. The sound so produced is played through additional speakers. The additional speakers are located so as to face the main, front speakers. The interaction of the sound between the front and rear speakers produces significant group delays in the listening room. While the group delays are not identical to those in the original listening or recording environment, the feeling of life-like reality is greatly enhanced by the use of such a system for the playback of CD's, cassettes and LP's. Even by applying the SuperStereo technique to a limited frequency range, the improvement is striking, but there are several grades of SuperStereo processors to cater for all users from the basic music lover to the most enthusiastic audiophile.
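
The sketch below is not Dynavector's processor design, which is not disclosed here; it only illustrates one generic way of creating frequency-dependent group delay for a rear-speaker feed, using a cascade of first-order allpass filters (Python with NumPy and SciPy assumed; all signal and coefficient values are arbitrary).

    import numpy as np
    from scipy.signal import lfilter

    # An allpass filter leaves the magnitude response flat but shifts phase,
    # so a cascade of allpass sections introduces group delay that varies
    # with frequency while preserving the spectrum of the programme material.

    def allpass_cascade(x, coefficients):
        """Pass x through first-order allpass sections H(z) = (a + z^-1) / (1 + a z^-1)."""
        y = x
        for a in coefficients:
            y = lfilter([a, 1.0], [1.0, a], y)
        return y

    fs = 48000
    t = np.arange(0, 1.0, 1.0 / fs)
    front_left = np.sin(2 * np.pi * 440.0 * t)    # stand-in for a music signal

    # Hypothetical rear feed: the same programme material, but carrying extra
    # frequency-dependent group delay from the allpass cascade.
    rear_left = allpass_cascade(front_left, coefficients=[0.5, -0.3, 0.7, -0.6])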

Outstanding Features of SuperStereo

1. Any normal stereo system, whether modest or top end, can be used with a Dynavector SuperStereo processor without any change of equipment or speaker setting, and connection is straightforward.

2. Normal, compact speakers are recommended for the rear pair.

3. SuperStereo works very well in any environment. Room size and acoustics are unimportant and room-tuning devices are not needed. So SuperStereo does not interfere with room decor.

4. The listening point is not limited to a centre "hot spot". Almost any position, except near to any speaker, is enjoyable.

5. The playback of music can be enjoyed with a lifelike reality not achievable with a conventional stereo system, however expensive.

6. Poorly recorded material sounds much better with SuperStereo and monaural LP's can sound even better than modern stereo recordings. Many historic recordings by great masters of the past are revived to sound like the original concert performances.

7. Even with very small systems, by using a SuperStereo processor, deep bass can easily be reproduced, and dynamic range, smoothness and detailed resolution are much higher than in conventional stereo systems. The purchase of expensive equipment is not necessary to enjoy the real sound of music. And there is no need to worry about obsolescence from new digital "high tech" which constantly appears on the market.

8. The ultimate SuperStereo can be achieved by a step by step add on of further processors without great expense. Expensive speakers and amplifiers are not needed.

9. SuperStereo has many applications in audio including PC music, in car entertainment systems, home theatre, professional sound systems for theatres, PA systems, synthesisers and electronic instruments, the tailoring of studio acoustics and medical therapy systems. Patents have been granted world-wide.

Final Conclusion

It is increasingly recognised today that audio engineers have, in recent decades, been preoccupied with meeting specifications for hi-fi hardware which have little relevance to the reproduction of life-like sound. Yet existing audio technology is still failing to satisfy listeners who wish to enjoy the great cultural heritage of musical achievement preserved on analogue recordings since the time of Thomas Edison.

Today, Dynavector SuperStereo is the only system which starts out with the clear aim of ensuring that recorded music can be played back in as musically complete a form as possible. While digital engineers can reproduce the sound waves of music, achieving the actual sound of music is a much more complex problem when listeners want life-like reality from their playback systems. For this reason, a multi-disciplinary approach to the development of hi-fi hardware is required, involving both musicians and advanced scientists.

The audio business now has two main parts. One is the pursuit of digital sound on new media; this area is growing rapidly into a large industry, and the truthful reproduction of the older media is completely ignored by this sector.

The second, smaller part is concerned only with traditional hi-fi stereo. Here, due to the poor cost-performance ratio of stereo systems, many enthusiastic consumers are staying out of the market. The result, unfortunately, is that the hi-fi business is showing serious signs of decline. This is due to failures in audio engineering, which has focussed too exclusively on the existing hi-fi criterion referred to at the beginning of this paper. This criterion should be reconsidered and changed; otherwise we face the prospect, sooner or later, of few people having any interest in stereo music. Only one system is able to remedy this lamentable state of affairs: the Dynavector SuperStereo System.

Dynavector Systems Ltd.
12 January 2001