Introduction

Almost every home has a television today. It offers programmes from a number of available channels and is very simple to use. Cable TV (CATV) makes it possible to choose programmes from a large number of channels. Next came the video rental business which, in combination with a video recorder, allows customers to select movies whenever they want. This service may be called video on demand.

Nowadays Video-on-Demand (VoD) covers much wider services and opportunities. Today's technology allows telecommunication network operators to offer services such as home shopping, games, and movies on demand. These services should be priced competitively with video rental, and customers do not need to travel to use them. These possibilities have been opened up by developments in the telecommunication and electronics industries. The capacity of a hard disk has doubled almost every year at near-constant cost. The useful compression ratio for video has increased considerably; MPEG-formatted video can be transported at a bit rate of a few Mbit/s. Digital signal processing techniques permit the transport of a few Mbit/s over existing copper wires for a distance of a few kilometres. Finally, Asynchronous Transfer Mode (ATM) systems allow the switching of any reasonable bit rate to a single customer or to multiple customers among a large number of connected customers. However, today's transmission bandwidth is large only downstream towards the customer, with narrow upstream bandwidth. As upstream bandwidth widens in the future, interactivity between the customer and the service provider will increase.

This new technology is being developed all the time, because Video-on-Demand has so many different applications to offer to customers and its economic potential has been recognised. Many companies, organisations, and universities are developing products and standards. Both cable TV and telephone operators invest in their networks and run Video-on-Demand trials. To finance the required investments, higher volumes must be reached on the residential side rather than on the business side, which is currently ahead in adopting the technology. The competition is hard, and it is getting harder all the time, so some companies have established business relationships to pool their knowledge and resources. In addition, they may avoid some regulatory restrictions before the telecommunication markets are opened to everyone.

Interactive services

The current TV broadcasting will undergo a fundamental change with interactive video delivery services. Many TV stations broadcast their programmes simultaneously to users, who select one of the available channels to view at a particular time. In contrast, with an interactive system a much wider selection of programmes becomes available at any time.

Types of interactive services

Based on the level of interactivity, interactive services can be classified into several categories. The following categories are collected from an article written by Thomas D.C. Little and Dinesh Venkatesh from Boston University [1].

Pay-per-view (PPV) services are the easiest to implement, and true VoD (T-VoD) systems are the most difficult. PPV and quasi-VoD (Q-VoD) are services such as watching movies; in these cases a local controller, the set-top box, can filter multiple channels to achieve the service. T-VoD requires a bi-directional signal from the user to a centralised controller.

Interactive services cover a wide range of services from movies-on-demand to distance learning. Some of the basic interactive multimedia services are listed below in Table 1.

Table 1. Interactive multimedia services.


Application                  Description

Movies-on-Demand             Customers can select and play
                             movies with full VCR capabilities.
Interactive video games      Customers can play downloadable
                             computer games without having to
                             buy a physical copy of the game.
Interactive news television  Newscasts tailored to customer
                             tastes with the ability to see
                             more detail on selected stories.
                             Interactive selection and
                             retrieval.
Catalogue browsing           Customers examine and purchase
                             commercial products.
Distance learning            Customers subscribe to courses
                             being taught at remote sites.
                             Students tailor courses to
                             individual preferences and
                             time constraints.
Interactive advertising      Customers respond to advertiser
                             surveys and are rewarded with
                             free services and samples.
Video conferencing           Customers can negotiate with
                             each other.
                             This service can integrate
                             audio, video, text,
                             and graphics.

System elements

A Video-on-Demand system has many elements that are necessary for providing the complete service. These include video servers, the community network, the switching office, the set-top unit, and the backbone network (see Figure 1). VoD system providers must select the right technology, features, performance, price, reliability, and ease of use for their services. Equipment is being developed to operate in different environments and with a variety of services.

Figure 1: System elements.

The main VoD scenario consists of a local database and server connected to the user via a communications network. The data is stored at local distribution sites which are connected through a high-speed backbone network to information archives and video servers. This distribution scheme serves many purposes [1]. First, it can be implemented in a distributed fashion, increasing availability and reliability. Second, a provider can tailor the information delivery to the specific tastes of a user community in a particular geographic area, reducing costs. Third, it is easier to manage, as each local system is responsible for its own billing and accounting. Fourth, the system can be constructed in a regional, piecewise fashion.
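As an illustration of this two-level scheme, a minimal sketch follows (the class and method names are my own, hypothetical, and do not describe any particular VoD product): a request is served from the local distribution site when the programme is cached there, and otherwise fetched from the central archive over the backbone.

    # Hypothetical sketch of the distribution scheme described above: popular
    # titles are held at the local distribution site, everything else is
    # fetched from the central archive over the backbone network.
    class Archive:
        def __init__(self, titles):
            self.titles = set(titles)

        def fetch(self, title):
            return f"{title}: streamed from the central archive via the backbone"

    class LocalSite:
        def __init__(self, archive, cached_titles):
            self.archive = archive
            self.cache = set(cached_titles)

        def request(self, title):
            if title in self.cache:                       # served locally
                return f"{title}: streamed from the local server"
            return self.archive.fetch(title)              # fall back to the archive

    archive = Archive(["News", "Opera", "Western"])
    site = LocalSite(archive, cached_titles=["Western"])  # tailored to local tastes
    print(site.request("Western"))    # local hit
    print(site.request("Opera"))      # served over the backbone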

Set-top unit

The user interacts with the services through the set-top unit at the subscriber premises. Together with the television and the remote control, it gives consumers the opportunity to connect to a video server and browse through a selection of movies or other content such as news or games. The key components of the set-top device are the line transceiver, demodulator, decompression unit, back-channel interface, remote control, and display driver.

The cost of the set-top unit must be limited to a reasonable price (a few hundred dollars) for the VoD technology to succeed. Open and interoperable systems that let users subscribe to several different services are preferred.

Community network

The communications infrastructure between the customer premises and the local switching office is called the community or subscriber network. It connects the video server and the set-top device. A VoD system will require the transfer of huge volumes of data at very high speed. Many communication protocols and network architectures have been proposed to connect the various components. However, ATM is emerging as the most important technology. The interconnection includes both signalling and programme data transfer, the latter in real-time, semipermanent, and on-demand forms. ATM combines the advantages of packet and circuit switching schemes, but each access technology has its own service range, bandwidth, and environmental characteristics.

Fiber

Currently, fiber is used in trunk lines. Cable and telephone companies use feeder fibers to deliver information to the nodes, which serve customers over coaxial or copper pair cables. Traffic in networks is expected to grow very fast due to new services, so fiber to the home (FTTH) and fiber to the curb (FTTC) systems, for example, are expected to carry wider bandwidths in the future. At the same time, fiber has other advantages. It has no active nodes in the network to maintain and can be installed in restricted duct space. Furthermore, it maximises just-in-time provisioning and is a truly future-proof network. However, it is currently too expensive to invest in fiber on a large scale; a high number of customers is needed to share the equipment costs, as in feeder lines.

ADSL

The Asymmetric Digital Subscriber Line (ADSL) system is an asymmetrical bi-directional transmission system used as the local subscriber loop between the local telephone switch and the subscriber's home, thus allowing the economical transmission of broadband services without signal regenerators.

In combination with the telephone signals, which may be analogue or digital (ISDN), control (16 and 24 kbit/s) and video (2 to 6 Mbit/s) information channels may be transmitted downstream towards the customer. In the upstream direction there are at least the telephone and control channels, and optional duplex bearers of up to 576 kbit/s.

Carrierless AM/PM (CAP), Discrete Multitone Transmission (DMT), and Frequency Division Multiplexing (FDM) are the modulation techniques under consideration. ADSL systems are used as local subscriber loops with telephone or basic ISDN access. ADSL is not usable as a Video-on-Demand (VoD) access network when operating with PCM (Pulse Code Modulation) systems using multiplexing techniques in the subscriber loop, because in that case the lines are not physically switched.

ADSL uses relatively low bit rates, and the return channel in particular is quite narrow, which limits emerging new services. It is based on a star configuration using unshielded copper cables, with a two-wire line for each user.

Due to physical constraints, such as cable attenuation and frequency distortion, ADSL is limited in the distance it can cover. The spanning distances related to the transmission bandwidth through a cable with a diameter of 0.4 mm are the following. [2]

Twisted pair house network

New apartment buildings already get optical cables to the basement switchboards. Standard twisted-pair specifications can then be used as the access technology to provide direct ATM access. This means that the plain old telephone service is moved from the low frequencies into ATM, and the existing telephone cable is then used as a twisted-pair ATM connection.

On the other hand, network suppliers have to invest in some new devices. But more bandwidth is available, and customers could perhaps use the same terminal device as in the POTS. If new high-quality copper pairs are installed, the bandwidths could be quite wide. According to ATM Forum specifications the following bit rates are available: 155 Mbit/s (UTP5), 51 Mbit/s (UTP3) and 25 Mbit/s (IBM standard).

Cable television (CATV) network

The cable TV (CATV) distribution system is currently based on a tree-and-branch topology and will move towards a star topology in the future. The audio and video signals are transmitted via coaxial cables in the subscriber line area. The trunk lines are usually made of fiber. High CATV penetration is targeted in cities and large communities. Due to the high bandwidth, many channels are available, multiplexed onto the cable using Frequency Division Multiplexing (FDM). Since the number of channels available on the cable for services is limited, services that need a large bandwidth must be considered carefully.

Channel transmission on the cable is primarily unidirectional. Signals are inserted on the downstream channels by the so-called head end. Signals from customer sites are only allowed on certain upstream channels and they are only transmitted towards the head end. Although there is provision for upstream message transmission, many cable systems do not have the amplifiers and filters that are needed. In addition, the problems of signal regeneration and noise are harder in the upstream direction, as multiple noise sources are merged.

Cable systems are very vulnerable to physical damage, both from ageing and from wilful destruction. Although the cables used are shielded coax, there are connectors, and each of these is a potential leakage source, especially as the cable deteriorates. With ageing, all these weak points become potential noise spots, causing noise ingress and leakage. If the power used in the cable is high in order to limit noise ingress, there is more leakage. Furthermore, one big minus for CATV connections is secrecy. If there is no ATM multiplexer at the centre of the CATV star or at the root of the tree, all data travels on an ATM 'bus' that is available to all subscribers. This means more costs, because means must be provided to ensure privacy in the connections, and ATM itself does not provide encryption. This has to be done by other means, perhaps through encryption, but that encryption only takes place on the last hop. [2]

Wireless

A hybrid fiber and wireless distribution network could be used within a neighbourhood to reduce installation and maintenance costs, rather than for mobility. Such a system could be useful in areas where the copper wires are in bad condition, or where a physical connection between a local distribution point and the customer residences is limited or costs too much, for example when the service provider owns a fiber infrastructure but not the copper plant to the homes.

This is a future technology, because several problems remain to be solved, such as the signal structure in the transition from fiber to wireless and a wireless reverse channel. Nevertheless, there are some trials of wireless TV distribution networks: first, British Telecom's Millimetre-wave Multichannel Multipoint Video Distribution Service (MMDS), which worked at 29 GHz [3], and second, Cellular Vision's commercial offering in the New York area under a U.S. FCC pioneer licence in the 27.5 to 29.5 GHz band [4].

Switching office

The switching office means both the telephone company's central office and the cable company's head end. It is the place where services are fed and distributed to individual subscribers. It contains a head end, video dial tone gateway, switches, and video servers. See Figure 1.

Head-end

In the head-end equipment, the video streams are formatted and organised to get them into the community network. If ADSL is used, the head end switches the video streams onto the subscriber loops just as it does today with telephone calls. When coaxial cable is used, the head end is basically the same as CATV providers use today, except that the digital channels need digital modulators.

VDT Gateway

The video dial tone (VDT) is an asymmetric switched video service in which the customer chooses among a wide selection of video material and receives a real-time response. The gateway is the entry point for an information provider into a carrier's VDT network. It creates and manages the connection between the information provider and the set-top device. The standardisation of the interfaces and functions of the VDT gateway is under way.

Servers

The video server is the network equipment providing the storage for video programme material that can be requested by the customers. It has to perform many functions, such as admission control, request handling, data retrieval, guaranteed stream transmission, stream encryption, and support of functions found in VCRs, including pause, rewind, and fast forward.

The video material can be stored on a combination of magnetic or magneto-optic disks and magnetic tape devices. Different storage media offer different memory bandwidths for VoD services. The most popular movies are stored in RAM, the less popular ones on hard disk, and the least popular on tertiary storage. Optical systems like CD-ROM and magnetic systems like tape drives will be possible media for inexpensive tertiary storage for archival purposes. This kind of storage system reduces operating costs and can offer a wide selection of programmes to customers. In contrast, downloading and caching a complete programme at the user's home has many disadvantages. First, communication bandwidth is often so limited that the user must wait for most of the information to arrive; this delay could be unacceptable. The customer's equipment would also cost too much, because caching an entire programme needs a large amount of memory. Finally, information providers do not accept that their programmes are exposed to data duplication and piracy. Table 2 gives the estimated costs for different storage device types. It has been assumed that a 90-minute movie requires about 1 Gbyte of storage using MPEG1 compression.

Table 2. Costs for different storage device types.

Storage type    Cost/Mbyte  Sessions/device Cost/movie stored

RAM                 50.00$       200             50,000$
Hard disk            0.50$         5                500$
R/W optical          0.20$         2                200$
Magnetic tape        0.01$         1                 10$
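The assumption behind the table can be checked with a short calculation (a sketch of my own; the 1.5 Mbit/s MPEG1 rate is the figure quoted in the MPEG section later in this text, and all numbers are rounded): a 90-minute movie at that rate occupies roughly 1 Gbyte, and the per-movie costs then follow directly from the cost per Mbyte.

    # Rough sizing behind Table 2 (illustrative only, rounded figures).
    MPEG1_RATE_BIT_S = 1.5e6                  # about 1.5 Mbit/s (MPEG1)
    MOVIE_SECONDS    = 90 * 60                # a 90-minute movie

    movie_mbytes = MPEG1_RATE_BIT_S * MOVIE_SECONDS / 8 / 1e6
    print(f"Movie size: about {movie_mbytes:.0f} Mbyte")   # roughly 1000 Mbyte = 1 Gbyte

    cost_per_mbyte = {"RAM": 50.00, "Hard disk": 0.50,
                      "R/W optical": 0.20, "Magnetic tape": 0.01}
    for media, cost in cost_per_mbyte.items():
        print(f"{media:13s}: about ${cost * movie_mbytes:,.0f} per movie stored")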

A metadata server [1] is a database system which manages the metadata information. It contains abstract information about the location and characteristics of the data to be retrieved. The user can browse this summary information and select the programme to be delivered to the home.

Backbone network

Outside the local switching office, the backbone network connects it to other video servers that provide national or specialised information. Currently, the high-speed backbone network uses fiber cable and SDH-based transmission systems. In the future, ATM technology will come to the backbone network, and then probably also to the community networks, to simplify the interface requirements.

TUT network

Tampere University of Technology started ATM piloting in May 1993. The network was updated during autumn 1994 to the form shown in Appendix A. ATM is now a part of the operational TUT campus network which is connected to the ATM networks of the Finnish operators. The TUT ATM campus network is utilized in a number of national and European projects. As an example, Internet traffic between Tampere and Helsinki is carried over ATM. The Finnish University and Research Network FUNET is taking ATM widely into use during 1995.

The TUT ATM network utilizes mainly 155 Mbps optical interfaces. The physical terminal connections are mainly implemented with OC3/STM1 multimode optical fibre, but some 100 Mbps TAXI interfaces are also in use. ATM interface cards from Efficient Networks (ENI-155s-MF) and Fore Systems (SBA200) are utilized. These cards offer card-specific application programming interfaces providing AAL5, Classical IP over ATM (RFC1577), and raw ATM interfaces. LAN Emulation support will be available later.

Presently (January 1995), the network is PVC-based, but signalling (UNI3.0, Q.2931) will be taken into use as soon as it becomes available for the used equipment. SNMP-based network management is widely used in the network.

The ATM network connects a number of workstations including SPARCServer1000, SPARC20, SPARC10, DEC Alpha, and 486 EISA PC machines. The operating systems include SUN OS4.x, Solaris 2.x, and Windows.

A multimedia workstation network has been created by the Digital Media Institute. The network offers the opportunity to study real multimedia traffic under real circumstances. Transmission protocols, applications, and services can also be developed in such a multimedia network environment. Furthermore, the system makes it possible for the students and researchers of the university to use distributed multimedia services in their work, research, and studies. Several applications and services can be supported by the server. The server can be utilized for application development, tests, and piloting.

The core of the server is a SPARCServer1000 (TM), which provides file server functions, database services, and computing services. This computer is based on multiprocessor technology with four processors (maximum eight) and 256 MB of RAM. The server is capable of 2000 I/O operations per second. The operating system in the server is Solaris 2.3 (TM).

The server is supported by two fast SPARCStorageArrays (TM), each capable of storing 30 GB of data. The arrays are connected to the server with a high-performance Fibre Channel link. This Fibre Channel is a standard solution according to ANSI Fibre Channel FC-PH Rev 4.2, X3T11-755D and ANSI SCSI Fibre Channel Protocol Rev 8, X3T10-993D. The sustained data rate is 15 MB per second, which is highly dependent on the configuration.

The server is connected through the campus ATM LAN network to the Finnish ATM network. ATM interface cards are from Efficient Networks (ENI-155s-MF), and they can provide 155 Mbps transfer rate. Several ATM cards can be connected in parallel to increase the throughput.

Presently (in January 1995) the Multimedia Server acts as a file server, but a straightforward solution for carrying an MPEG2 transport stream over AAL5 is under development. In the future, the server will be developed in the direction of the standards for carrying audiovisual multimedia services over ATM that are evolving in the ATM Forum and ITU-T.

The Digital Media Institute has a number of SPARC10 and SPARC20 workstations equipped with an extensive range of multimedia devices and software (see Appendix B), connected over ATM to the multimedia server, for research and development activities.

TUT has experience with a variety of public-domain and commercial hardware and software components adapted to ATM. The tools include:

- Parallax video capture and compression board

- SunVideo board

- Mbone tools for videoconferencing over Internet

- Communique! videoconferencing system

- ShowMe videoconferencing system

- Bitfield BVCS Video Communication System

- Uniflix video-on-demand software

- Mosaic, World Wide Web

In the digital media pilot project, companies are going to join the network and use different services. Working with different companies gives useful experience and makes it possible to develop new services.

Other trials

There are many VoD trials currently under way. Many of these trials have not been created only for VoD, so they have been combined with ATM or other broadband network trials. Different trials involve different technologies, transmission media, equipment, customer coverage areas, and so on. Both cable TV and telephone companies have their own experiments, or they work together, like the TCI/US West joint venture that offers both cable and telephony services. For example, several companies have commercial trials of interactive services: Time Warner has limited VoD to homes via cable in Orlando, Videoway has quasi-VoD via cable in Montreal, Bell Atlantic has VoD using ADSL in northern Virginia, and NYNEX has VoD trials in Manhattan. Some projects are described below, chosen without any particular preference.

DIAMOND

Domestic Integrated Broadband Communications Applications Of Multimedia On Demand (DIAMOND) is one of the RACE projects. Its primary aim is to resolve some of the outstanding questions with respect to delivering multimedia services and to produce a number of demonstrators which can be evaluated with respect to their technical implications, their economic viability, and their usage and usability in the hands of potential consumers. This primary target has been split into subsidiary objectives, one of which is a VoD system.

One of the field trials of VoD services is provided by Helsinki Telephone Company (HPY). The trial concentrates on ADSL in the local loop. A local server is connected to ADSL transmission equipment, using a 2.048 Mbit/s bearer rate for a simplex downstream channel and 16 kbit/s for a duplex control channel. Telephone service (POTS) is offered to subscribers simultaneously with video transmission over the same subscriber line using low- and high-pass filters (POTS splitters). At the customer premises, the ADSL equipment has interfaces for both a set-top box and a telephone. The local server used in the field trial can also be accessed remotely via an ATM network (in working association with the RACE project MARS). Network evolution towards the FTTC concept is also discussed. The purpose of the trial is to investigate the technical aspects of copper in the local loop, because there is a need to deliver broadband services over the existing local loop investment without complete replacement.

British Telecom

British Telecom (BT) is very interested in VoD, because it has no rights to offer broadcast-type entertainment services in the country. BT is planning to use telephone lines for home shopping services, starting already at the end of 1994. Within a couple of years, the services will spread to banking services and interactive games.

The trial began in 1994 by connecting the homes of 70 BT employees. The unit working with the television has a copper or fiber transmission link to the switch, where it is connected to the programme server. Consumers can choose entertainment and information services by controlling their home unit with a remote controller. [5]

Home Shopping Network

Home Shopping Network Inc. is a nationwide video retailer for cable and broadcast television in the USA. Headquartered in St. Petersburg, Florida, Home Shopping Network pioneered the concept of video retailing in 1982 and went national in 1985. Today the company is testing Video-on-Demand systems.

The system is used by telephone, which connects to Home Shopping's call centre. Instead of speaking with a live operator, more than half of the Home Shopping Club's customers communicate directly with the company's Unisys mainframe through their push-button phones. The computer then passes the order along to Home Shopping's warehouses.

For movies, the computer would process and send the order to the customer's local cable company by satellite or fiber optic lines. The cable company would then switch on the customer's converter box to start the movie. If a customer wanted to pause the movie, the film could be delayed in five-minute increments with a button on the converter box. The system is a hybrid between Pay-per-View and Near Video-on-Demand. In addition to movies, the Home Shopping technology is designed to handle special Pay-per-View events, such as boxing matches and concerts.
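If the five-minute granularity is implemented, as is typical for Near Video-on-Demand, by moving the viewer to a copy of the same film started a few minutes later, the number of staggered copies needed is easy to estimate. The sketch below is my own illustration of that idea, not a description of Home Shopping's actual system.

    # Illustrative Near-VoD estimate (an assumption, not the documented design):
    # a viewer "pauses" by being switched to a copy of the film that started
    # later, so the pause granularity equals the stagger interval between copies.
    def staggered_copies(film_minutes, stagger_minutes):
        """Number of concurrently running copies of one film."""
        return -(-film_minutes // stagger_minutes)    # ceiling division

    print(staggered_copies(90, 5))     # a 90-minute film needs 18 staggered copies
    print(staggered_copies(120, 5))    # a two-hour film needs 24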

Bell Atlantic

In March 1993, the Federal Communications Commission granted Bell Atlantic-Virginia Inc. authority to begin a technical trial of asymmetric digital subscriber line (ADSL) technology for use in providing video dial tone over existing copper loop facilities. The technical trial, originally scheduled to end in March 1994, was extended to September 1994 and extended once again to continue until Bell Atlantic-VA's ADSL-based market trial begins, but no later than March 25, 1995.

During a three-month period, reported in the ADSL technical trial phase two report, a total of 268 employees participated in the technical trial. The participants' reactions to the services were tested and the technical feasibility of the service in a variety of network environments was assessed. The report shows steady video usage by participants, an average of 2.6 hours per week. Participants were very pleased with the quality and reliability of the service. Although some initial problems were encountered, these decreased due to improvements made in the equipment and the experience gained by the technicians and the participants themselves. Participants requested more features, more options, and a greater selection of movies. Nevertheless, they felt that the service was better than or comparable to other media (cable, VCR, broadcast) and easy to use.

To show detailed results, some examples are taken from the report. Set-top boxes were found to be the source of 22% of the trouble reports. These troubles were caused by software problems and faulty power supplies. The ADSL units (both in the central office and in participants' homes) were responsible for another 37% of the troubles, caused by faulty power amplifiers and capacitor modifications. In addition, participants were asked to comment on their satisfaction with the services. Overall satisfaction with the VoD services averaged 6.9 on a scale of 1 to 10, where 1 means not at all satisfied and 10 means extremely satisfied. Some negative comments concerned the set-top boxes or the amount of dialling involved. Participants were also asked whether they had experienced any negative effects on the quality or reliability of their voice/data service after installation of the VoD service. There were very few problems concerning voice/data services, and the majority of problems reported by participants were related to video usage.

Time Warner

In Orlando, Florida, the Full Service Network user trial began in December 1994. There are several companies behind the experiment, including Time Warner Cable, Scientific-Atlanta, AT&T, SGI, and Toshiba. The trial is a full-scale user experiment aiming at 4000 subscribers by the end of 1995. Currently there are only a few customers, and the services provided are VoD, home shopping, and games.

The concept is based on ATM to the home with MPEG, over a hybrid fiber-coax transmission medium; the last mile is coax. The bit rate at the customer site is 3.5 Mbit/s for video. The feed from AT&T's ATM switch runs at 45 Mbit/s (DS3). Each node serves on average 300 clients. At the service provider site there will be 8 SGI Indigo servers and 16 SGI Vaults providing a total of 1.5 terabytes of disk capacity. The multiplexing is performed at the ATM level using VPI/VCI and AAL5. This information is cell-interleaved, error-protected with Reed-Solomon coding, and then modulated using 64-QAM. In addition, the data is encrypted.
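The figures quoted above can be combined into a rough back-of-the-envelope check (my own arithmetic; the 90-minute movie length is an assumption carried over from the storage discussion earlier in this text): how many 3.5 Mbit/s streams fit in one 45 Mbit/s DS3 feed, and how many movies the 1.5-terabyte store holds at that rate. With roughly a dozen simultaneous streams per DS3 feed against some 300 clients per node, the design evidently relies on only a fraction of the customers viewing at the same time.

    # Back-of-the-envelope check of the Orlando trial figures (illustrative only).
    VIDEO_RATE_MBIT = 3.5        # per-customer video rate
    DS3_RATE_MBIT   = 45.0       # feed from the ATM switch
    STORE_TBYTE     = 1.5        # total disk capacity (16 SGI Vaults)
    MOVIE_MINUTES   = 90         # assumed movie length

    streams_per_ds3 = int(DS3_RATE_MBIT // VIDEO_RATE_MBIT)
    movie_gbytes = VIDEO_RATE_MBIT * 1e6 * MOVIE_MINUTES * 60 / 8 / 1e9
    movies_stored = STORE_TBYTE * 1e12 / (movie_gbytes * 1e9)

    print(f"{streams_per_ds3} simultaneous 3.5 Mbit/s streams per DS3 feed")   # 12
    print(f"about {movie_gbytes:.1f} GB per movie, "
          f"roughly {movies_stored:.0f} movies in 1.5 TB")                     # ~2.4 GB, ~630 movies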

The trial has just started, and the services do not work without problems. For example, the 150 ms remote control delay is perceived as a problem, even though the end-to-end transmission delay is only 200 ms. The trial goes on, and new solutions are being researched all the time.

Standards

MPEG

Moving Pictures Expert Group (MPEG) is the name of the standard which has been produced by the ISO committee working on digital colour video and audio compression. MPEG defines a bit-stream representation for synchronised digital video and audio, compressed to fit into a bandwidth of 1.5 Mbit/s. This corresponds to the data retrieval speed from CD-ROM and DAT, and a major application of MPEG is the storage of audio-visual information on these media. MPEG is also gaining ground on the Internet as an interchange standard for video clips.

The MPEG standard has three parts: video encoding, audio encoding, and systems. The systems part includes information about the synchronisation of the audio and video streams. The video stream takes about 1.15 Mbit/s, and the remaining bandwidth is used by the audio and system data streams.

MPEG video encoding starts with a fairly low-resolution (352 x 240 pixels x 30 frames/s in the US; 352 x 288 pixels x 25 frames/s in Europe) video picture. RGB pixel information is converted to chrominance/luminance and a complex, lossy compression algorithm is applied. The algorithm takes the time axis as well as the spatial axes into account, so a good compression ratio of up to 200:1 is achieved when the picture is relatively unchanging. The compressed data contains three types of frames: I (intra) frames are coded as still images; P (predicted) frames are deltas from the most recent past I or P frame; and B (bi-directional) frames are interpolations between I and P frames. I frames are sent once every 10 or 12 frames. Reconstructing a B frame for display requires the preceding and following I and/or P frames, so these are sent out of time order.
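The reordering can be sketched as follows (a simplified illustration of the idea, not an implementation of the actual MPEG syntax): B frames are held back and transmitted only after the I or P reference frame that follows them in display order.

    # Simplified sketch of MPEG frame reordering: B frames are transmitted only
    # after the following I or P reference frame they depend on.
    def coded_order(display_frames):
        """Reorder display-order frames into coded (transmission) order."""
        coded, pending_b = [], []
        for frame in display_frames:
            if frame.startswith("B"):
                pending_b.append(frame)      # hold B frames back
            else:                            # an I or P reference frame
                coded.append(frame)          # send the reference first ...
                coded.extend(pending_b)      # ... then the B frames preceding it
                pending_b = []
        # Trailing B frames would in practice reference the next group's I frame.
        return coded + pending_b

    display = ["I1", "B2", "B3", "P4", "B5", "B6", "P7", "B8", "B9", "P10"]
    print(coded_order(display))
    # ['I1', 'P4', 'B2', 'B3', 'P7', 'B5', 'B6', 'P10', 'B8', 'B9']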

Substantial computing power is required to encode MPEG data in real time, perhaps several hundred MIPS to encode 25 frames/s. Decoding is not as demanding. Finally, the quality of MPEG-encoded video is often said to be about the same as that of a VHS video recording.

Version 2 of MPEG is under development and will be completed by May 1995. MPEG2 is designed to offer higher quality at a bandwidth of between 4 and 10 Mbit/s. This is too fast for playback from CD using today's technology. MPEG2 will compress, e.g., 720 x 480 full-motion video for broadcast television and video-on-demand applications. It has several advantages [6] compared to MPEG1:

MPEG2 over ATM has several problems. For example, transmission delay, delay jitter, and error correction are areas of intensive work today. The ATM Forum SAA/AMS working group's phase 1 specification for MPEG2 over ATM for the VoD service is evolving rapidly. It aims to use the ATM equipment that is available today. The intention is that the user needs to buy only an MPEG2 card, and the problems are then solved in the way the equipment manufacturers see necessary: first, the network is assumed to limit the magnitude of the problems, and secondly, what remains is solved in the MPEG application.

This produces an obvious interoperability problem, as there are no clear specifications for the bitstream. The manufacturers can then determine the price of the equipment, since they sell complete systems to the users. Interoperability is optional in AMS phase 1, and it is achieved by supporting an ITU standard called H.222.1.

H.222.1 is in the ITU standards hierarchy for implementing the necessary functions needed to interface MPEG2 applications to the ATM network. It also specifies support for high-quality videoconferencing functionality, error correction, and other AALs (AAL1 and AAL2) in addition to AAL5. H.222.1 and other ITU recommendations related to MPEG2 transmission over ATM will be frozen in February 1995.

Furthermore, MPEG3 and MPEG4 are under construction. MPEG3 was meant to compress full-motion, HDTV-quality video, with a required data rate thought to be 5 to 20 Mbit/s, but MPEG3 will be merged into MPEG2, because MPEG2 is flexible enough for such applications. MPEG4 is associated with narrowband channels, such as mobile networks and POTS, which use small frame sizes and require slow refreshing. Such applications will need a data rate of 9 to 40 kbit/s. For detailed information, see the ISO/IEC 11172 specification for MPEG1 and the ISO/IEC 13818 specification for MPEG2. [7]

JPEG

JPEG is a standardised image compression mechanism. JPEG stands for Joint Photographic Experts Group, the original name of the committee that wrote the standard. It is designed for compressing either full-colour (24-bit) or grey-scale digital images of natural (real-world) scenes. It does not handle black-and-white (one bit/pixel) images, nor does it handle motion picture compression. Nevertheless, it can handle pictures one by one in the same information stream.

JPEG is lossy, meaning that the image you get out of decompression is not quite identical to what you originally put in. The algorithm achieves much of its compression by exploiting known limitations of the human eye, notably the fact that small colour details are not perceived as well as small details of light and dark. Thus, JPEG is intended for compressing images that will be looked at by humans. If you plan to machine-analyse your images, the small errors introduced by JPEG may well be a problem for you, even if they are invisible to the eye.

A useful property of JPEG is that the degree of lossiness can be varied by adjusting the compression parameters. This means that the image maker can trade off file size against output image quality. You can make extremely small files if you do not mind poor quality; this is useful for indexing image archives, making thumbnail views or icons etc. Conversely, if you are not happy with the output quality at the default compression setting, you can jack up the quality until you are satisfied, and accept lesser compression.
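The trade-off can be demonstrated in a few lines (a sketch using the modern Pillow library, which of course postdates this text; the file names are arbitrary examples): the same image is saved at several quality settings and the resulting file sizes compared.

    # Demonstrates the JPEG size/quality trade-off described above.
    # Requires the Pillow library; "photo.png" is an arbitrary example input.
    import os
    from PIL import Image

    img = Image.open("photo.png").convert("RGB")    # JPEG has no alpha channel
    for quality in (10, 50, 95):                    # poor, moderate, near-best quality
        name = f"photo_q{quality}.jpg"
        img.save(name, "JPEG", quality=quality)
        print(f"quality={quality:3d}: {os.path.getsize(name):8d} bytes")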

Although it handles colour files well, it is limited in handling black-and-white images and images with sharp edges (the files come out very large). The processing cost, even on up-to-date computers, is also high. For further information, contact the Independent JPEG Group at jpeg-info@uunet.uu.net or FTP sites like ftp.uu.net:/graphics/jpeg. [7]

H.261

Recommendation H.261, commonly called px64, describes the video coding and decoding methods for the moving picture component of audio-visual services at the rate of p x 64 kbit/s, where p is in the range 1 to 30. It describes the video source coder, the video multiplex coder and the transmission coder.

This standard is intended for carrying video over ISDN, in particular for face-to-face videophone applications and for videoconferencing. Videophone is less demanding of image quality and can be achieved with p=1 or 2. For videoconferencing applications (where there is more than one person in the field of view) higher picture quality is required and p must be at least 6.

H.261 defines two picture formats: CIF (Common Intermediate Format) has 288 lines by 360 pixels/line of luminance information and 144 x 180 of chrominance information; and QCIF (Quarter Common Intermediate Format) which is 144 lines by 180 pixels/line of luminance and 72 x 90 of chrominance. The choice of CIF or QCIF depends on available channel capacity, e.g., QCIF is normally used if p<3.
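A small sketch ties these figures together (my own illustration; the function name is hypothetical): the channel rate is simply p x 64 kbit/s, and the picture format follows the rule quoted above.

    # Channel rate and typical picture format for H.261, following the rules above.
    def h261_channel(p):
        """Return (rate in kbit/s, picture format) for a p x 64 kbit/s channel."""
        assert 1 <= p <= 30
        rate_kbit = p * 64
        picture = "QCIF" if p < 3 else "CIF"    # QCIF is normally used if p < 3
        return rate_kbit, picture

    for p in (1, 2, 6, 30):
        rate, fmt = h261_channel(p)
        print(f"p={p:2d}: {rate:4d} kbit/s, {fmt}")
    # p=1 or 2 suffices for videophone; p>=6 (384 kbit/s) is needed for videoconferencing.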

The actual encoding algorithm is similar to, but incompatible with, that of MPEG. Another difference is that H.261 needs substantially less CPU power for real-time encoding than MPEG. The algorithm includes a mechanism which optimises bandwidth usage by trading picture quality against motion, so that a quickly-changing picture will have a lower quality than a relatively static picture. H.261 used in this way is thus a constant-bit-rate encoding rather than a constant-quality, variable-bit-rate encoding.[7]

MHEG

MHEG stands for the Multimedia and Hypermedia Information Coding Experts Group. This group is developing a standard Coded Representation of Multimedia and Hypermedia Information, commonly called MHEG. The standard is likely to be published in two parts, part one being object representations and part two being hyperlinking.

MHEG is suited to interactive hypermedia applications such as on-line textbooks and encyclopaedias. It is also suited to many of the interactive multimedia applications currently available on CD-ROM. MHEG could, for instance, be used as the data structuring standard for a future home entertainment interactive multimedia appliance.

To address such markets, MHEG represents objects in a non-revisable form and is therefore unsuitable as an input format for hypermedia authoring applications: its place is perhaps more as an output format for such tools. MHEG is thus not a multimedia document processing format; instead it provides rules for the structure of multimedia objects which permit the objects to be represented in a convenient form (e.g. video objects could be MPEG-encoded). It uses ASN.1 (Abstract Syntax Notation One) as a base syntax to represent object structure, but allows for the use of other syntax notations. In addition, an SGML (Standard Generalised Mark-up Language) syntax is also specified.

There are four types of MHEG objects, which could be textual information, graphics, video, audio, etc.:

MHEG supports various synchronisation modes, for presenting output objects in these relationships.

Since the beginning, the MHEG partners have worked together in experimental programmes based on the interchange and validation of test objects and pilot implementations. OMHEGA (Open MHEG Applications) was launched by the Commission of the European Communities at the beginning of 1994 with the following objectives:

Other European projects began in January 1994, such as RACE/MARS (Multimedia Audiovisual Retrieval Service) and RACE/AMMIS (Advanced Man-Machine Interface for Selection of TV Programs), using MHEG objects in Video-on-Demand applications and multimedia TV guides for digital television broadcasting.

It seems that Part I of the MHEG specification may enter the International Standard (IS) stage by the end of 1994, Part II during 1995, and Part III by the end of 1995. The reference for MHEG is T.170 | ISO. [8], [9]

DSM-CC

The Digital Storage Media Command and Control (DSM-CC) protocol is an application protocol intended to provide the control functions and operations specific to managing ISO/IEC 11172 (MPEG1) and ISO/IEC 13818 (MPEG2) bitstreams. An informative annex of MPEG2 Systems (ISO/IEC 13818-1) provides a specification of the syntax and semantics for a simple environment of single user-to-user DSM applications. MPEG systems are also deployed, however, in more diverse and heterogeneous network environments for many applications, including Video-on-Demand and interactive video. The Working Draft of ISO/IEC 13818-6, MPEG-2 Digital Storage Media Command and Control Extension, contains an extension of the DSM-CC protocol for supporting such applications in both stand-alone and heterogeneous network environments and is an integral part of the ISO/IEC 13818 (MPEG2) standards.

The draft provides the specifications for the control of MPEG2 bitstreams in both standalone and distributed environments with the following characteristics:

Multi-server
DSM-CC clients may request service from multiple servers. The environment also contains servers communicating with other servers.

Multi-session
A DSM-CC client has the ability to have multiple simultaneous calls in progress.

Multi-client
A single piece of material may be accessed concurrently or sequentially by multiple clients.

Connectivity
Broadcast
Point-to-point
Multicast
Multi-point to multi-point

Multiprotocol
A DSM-CC client may request service of multiple servers, where each communication path may cross multiple diverse network protocols. These underlying network protocols must be transparent to the DSM-CC Extension.

The DSM-CC protocol gives general applications, MHEG applications, and scripting languages access to primitives for establishing or deleting network connections, using the User-Network (U-N) primitives, and for communication between a client and a server across a network, using the User-User (U-U) primitives. U-U operations may use a remote procedure call protocol. Both U-U and U-N operations may employ a message-passing scheme which involves a sequence of bit-pattern exchanges.

DSM-CC may be carried as a stream within an MPEG1 System Stream, an MPEG2 Transport Stream, or an MPEG2 Program Stream. Alternatively, DSM-CC may be carried over other transports, such as TCP or UDP.

The Working Draft of ISO/IEC 13818-6 was produced in November 1994. ISO/IEC JTC1/SC29 Working Group 11 is working on DSM-CC, and many issues are currently under study.

DAVIC

Rapid developments in digital video and audio technology are creating isolated areas of market activity. The race to achieve a critical mass of hardware and service development has led to confusion in the marketplace, and users will make wrong decisions. The purpose of DAVIC, the Digital Audio Visual Council, is to ease the development of widely supported and enduring cross-industry specifications.

The First DAVIC Call for Proposals is an invitation to submit proposals for cross-industry and cross-national end-to-end interfaces and protocols and other technical and factual data that are relevant for a range of emerging digital audio-visual applications and services. The council invites proposals from content creators, packagers, programmers, server manufacturers, delivery media and service organisations, terminal device manufacturers, software vendors, integrators, governmental and regulatory organisations, standards bodies, research and academic organisations, industry consortia, and any individual or organisation whose interest covers digital audio-visual applications and services.

Although there is enthusiasm to deploy systems that support the widest possible range of services, the industry can be expected to grow as revenues from first services fuel continued developments. Therefore, the Call for Proposals concentrates on a subset of services and applications, i.e., Video-on-Demand and its derivatives, which are believed to be of immediate importance to DAVIC members. However, it has invited respondents to make concrete proposals on how other services can be supported as well.

DAVIC's specifications aim to achieve the following:

One of the DAVIC core services that has greatly influenced the council's thinking is Video-on-Demand. The following characteristics or functionalities of the VoD services have been identified:

The proposals were considered at the Tokyo meeting on 5-7 December 1994. A working reference model will be produced at the March 1995 meeting, interoperability experiments will start in June, and the first set of specifications will be completed in December 1995. DAVIC is an open organisation and expressly encourages any corporate entity to become a member. Inquiries on membership should be directed to Dr. Leonardo Chiariglione, e-mail leonardo.chiariglione@cselt.stet.it [10].

Products

It seems that VoD products are under development, but some devices and software are already available today. Due to the many trials and continuing standardisation, there are many kinds of products from different companies. For example, the video server market has become quite confusing. Servers generally can be used to store all kinds of data: audio, text, moving video, and stills. Most can do variable compression or no compression at all. They differ with respect to the amount of data space, the overall bit rate of data that can leave or enter the server, and the number of channels of data that can flow out or in at a particular time. The following companies have developed their own video servers: Alamar, Avid, BTS, Data General, DEC, Dynatech, Hewlett-Packard, IBM, Micropolis, Odetics, Oracle, Quantel, Silicon Graphics, Sun Microsystems, and Tektronix. Other companies, like Apple and Microsoft, are racing to develop graphical user interfaces for VoD systems. Furthermore, General Instruments, Silicon Graphics, and Nintendo are designing customer set-top boxes. All this development leads to the conclusion that interactive services will come to homes in the near future. The following paragraphs describe some hardware and software products, which are mainly used on workstations and related to business applications.

Hardware

SunVideo

The SunVideo board is a real-time video capture and compression subsystem designed for Sun SPARC station systems. The single-wide Sbus card captures, digitises, and compresses NTSC or PAL video signals from video cameras, VCRs, videodisc players, and other sources. Compressed digital video produced with the SunVideo board is then available to be transmitted over networks, stored on disk, or displayed by the host CPU in a window on the workstation screen. It supports multiple compression routines: Cell, JPEG, H.261, and MPEG1.

The SunVideo system is an essential enabling technology for implementing multimedia applications such as video conferencing, video mail, or compound document creation incorporating video clips for training or presentations.

This technology offers access to video as another data type to be manipulated by the computer for a host of applications. With a single card, video can be captured for use in presentations or as an element in a multimedia training or information system. Recipients of broadcasts of digital video for announcements or lectures do not need any special video hardware. Video conferencing applications supporting multiple simultaneous users are possible when the participants all have capture and compression capabilities, provided by the SunVideo board.[11]

XVideo

XVideo, produced by Parallax Graphics, is a double-wide SBus card which digitises and compresses video and drives the screen in workstations of the Sun-4 architecture. The card is available with different capabilities. It can digitise video signals of the PAL, NTSC, and SECAM standards. The video compression hardware is based on the C-Cube CL550 compression chip, which can handle real-time video compression and decompression using the JPEG method. Furthermore, it works as an accelerated true-colour display driver. In practice, the card can be used for video conferencing, watching TV programmes, and storing video sequences.

Perspective 2000

The Perspective 2000 is a video server designed by Vela Research Inc. It answers the need for a multimedia playback solution. The Perspective 2000 provides a flexible, scalable architecture that allows for a variety of configurations to fill an individual customer's needs. The modular design allows equipment and services to be expanded as business requirements change.

The server stores digitised video films like movies and commercials in a compressed digital format. The compressed movie data can be decompressed in real time to deliver an NTSC video output to support near Video-on-Demand, ad-insertion and other miscellaneous motion video playback applications.

The Perspective 2000 uses MPEG2 compression/expansion technology, but is not limited to the MPEG2 compression scheme. This digital technology allows each Perspective 2000 to output multiple NTSC playback channels. The MPEG2 Perspective 2000 decoder supports full decoding of MPEG2 ISO 13818 bitstreams.

The Site Manager system monitors and controls the video server from either a local or a remote site. The status response from each server contains voltage levels, internal temperatures, fan speeds, disk array status, movie playback status, buffer/decompression engine status, software and hardware revision levels, and other related statistics. These data are stored in a database in the Site Manager.

Software

Uniflix

Uniflix is a software application toolkit that enables its users to transmit and receive high-quality digital video and audio over a network. Uniflix is the industry's client-server video solution and can be used with either the SunVideo compression card from Sun Microsystems or the XVideo/PowerVideo cards from Parallax Graphics.

The client-server architecture provided by Uniflix allows clients on a network to remotely access compressed video streams generated by Uniflix video servers. The servers generate the streams using either of the video cards mentioned above. Uniflix servers and clients offer an integrated Video-on-Demand software solution for the industry.

The SunVideo based Uniflix server is capable of compressing and transmitting video data throughout a network at a data rate from 0.128 to 2.0 Mbits/s. The latter is sufficient to deliver 30 frames/s at one-quarter NTSC resolution.

The PowerVideo based Uniflix server is capable of compressing and transmitting video data at a data rate from 0.128 to 14.0 Mbits/s. The latter is sufficient to deliver 30 frames/s at full NTSC resolution and near-broadcast quality.

Digital video/audio can be compressed and stored in a Unix file using the Uniflix capture utility. The video can be captured at up to 30 frames/s at full NTSC resolution. The captured video can be edited using the Uniflix video editor and can be made available to clients on the network via a movie-on-demand video file server.

All servers and clients support four modes of operation: Video-on-Demand, continuous multicast, broadcast, or point to point transmission/reception of the video and audio data. Any network interface on the workstation supporting the TCP/IP protocol can be used to transport the data including Ethernet, ATM, FDDI, and others. Depending on the bandwidth capacity of your network, multiple servers (including live and file servers) can simultaneously be run on it.

StarWorks

StarWorks is digital video networking software produced by Starlight Networks Inc. It provides a cost-effective, scalable solution for enterprise video networking needs. With StarWorks software, many users can simultaneously share hundreds of hours of streaming video and audio data over local area networks (LANs) such as Ethernet and FDDI. StarWorks makes today's standalone video applications available on the LAN and provides the foundation for future advanced networked video applications, such as support for live video and integration with other application servers.

StarWorks client/server-based software delivers cost-effective performance for workgroup video applications. It provides reliable network delivery and Video-on-Demand services from a standard enterprise server and meets the needs of even high-bandwidth video. StarWorks for the Solaris software environment supports hundreds of simultaneous Windows, Macintosh, Solaris, and DOS users, who can view video applications while accessing other non-video applications, such as NFS servers and databases. StarWorks can even be located on the same server as other applications.

With StarWorks software, a SPARC-based server can handle over 100 gigabytes of digital video and audio storage and thousands of video and audio files. StarWorks server software is built on the Solaris computing environment to meet the real-time requirements of streaming data, yet fits into the current networking environment. It automatically maps the network topology and balances the network load. In addition, StarWorks guarantees video delivery using resource reservation and offers easy field maintenance, diagnostics, repair, and network management functions.

The StarWorks Solaris Client Development Pak provides an interface to help application and tool vendors develop total networked video solutions.

Communique!

Communique! software, produced by InSoft, is a workstation-based product that integrates the multimedia aspects of graphics, audio, video, text, and native application files into a virtual conference. A meeting can be held between desktops spanning an entire network. It supports SunVideo, XVideo, and PowerVideo cards using JPEG compression.

Communique! contains a suite of easily managed iconic tools for defining and initiating an on-line, real-time conference with fellow workgroup members. Like a physical conference room, the Communique! Virtual Conference Room has tools that help people easily exchange ideas and information. Audio conferencing, video conferencing, a shared whiteboard, and shared text tools are just some of the tools available to the conference members. In a meeting, users can apply whichever tools are necessary to get their point across to the participants. Since Communique! runs on the workstation, users can grab images and data from any of their workstation applications for discussion in the conference.

To set up a meeting, the software first creates connections by adding users of the software on the same local area network, or users identified by their address, to the meeting. There can be up to ten participants at the same time. New users can be added or removed during the meeting. Participants are shown as icons which include information about each participant's state: active, absent, or quit. [12]

ShowMe

ShowMe is also multimedia conferencing software. It creates a meeting, maintains a meeting, and has tools for multimedia conferencing. ShowMe works like Communique!, but requires a SunVideo card in the Sun workstations. In addition, the number of meeting participants is not limited.

Sun has also introduced new software called ShowMe TV. It allows users to view multiple video broadcasts, record or edit television programmes, and transmit broadcasts to selected groups for training and other corporate communications. ShowMe TV is thus a networked digital video broadcast product.

Designed for Sun SPARC computers running the Solaris (TM) environment and standard TCP/IP networks, ShowMe TV consists of two primary components: a receiver and a transmitter. The ShowMe TV Receiver allows the user to display, control, and record programme material that is broadcast over the existing local area network. The receiver's integrated digital VCR capabilities enable the capture of a segment or an entire broadcast. The recorded programme is stored on local or remote disks for archiving or editing. The ShowMe TV Transmitter broadcasts video and audio source material over the network to any computer running Solaris and the ShowMe TV Receiver software. Multiple video and audio channels can be broadcast simultaneously to everyone on a network or to individuals. A broadcast scheduler is used to enter programme and scheduling information in a programme guide that can be accessed on the network. The ShowMe TV product uses efficient compression techniques to broadcast over the existing network without disrupting normal data flow or use of the network. Adding users to an existing broadcast channel does not impact the performance of the network.

Any SPARCstation can receive and display broadcasts with the installation of the ShowMe TV Receiver software. Similarly, any SPARCstation can become a broadcast node by adding a SunVideo board, the ShowMe TV Transmitter software, and a video source such as a VCR, camera, or tuner. The ShowMe TV Transmitter requires Solaris 2.3, while the ShowMe TV Receiver works on Solaris 2.2 or later, as well as Solaris 1.x versions.

INTV!

InSoft Network Television, INTV!, is based on InSoft's Digital Video Everywhere (DVE) architecture. INTV! allows video data to be distributed across the network to users. Any Sun workstation or server can be configured with standard television cabling as a non-dedicated TV station, and users do not need a video capture board installed in their workstation in order to view INTV!. To the user it looks like a standard TV remote control: simply point and click on the remote control interface to access INTV!. In short, it broadcasts data across the network when requested by the user.

References

[1] Thomas D.C. Little and Dinesh Venkatesh, Prospects for interactive video-on-demand, IEEE Multimedia, Fall 1994

[2] Dring K.-H., Extension of public networks for VoD services, Deutsche Telekom FTZ, 6.6.1994

[3] R.D. Carver, Millimetre-wave Radio for Broadband Local Access, ICC '91 Conference Record, vol. 3, pp. 1187-1190, Denver, Colo., June 1991.

[4] Cellular Vision, 300 Park Ave., New York, NY 10022.

[5] Jukka Behm, TEKES Tekniikan Näköalat, 4/1994

[6] Tsokkinen Mikko, Multimedia ATM-verkossa (Multimedia in ATM networks), Master's thesis (diplomityö), p. 21, 1994

[7] Borko Furht, Multimedia Systems: An Overview, IEEE Multimedia, Spring 1994

[8] Françoise Colaitis, Opening Up Multimedia Object Exchange with MHEG, IEEE Multimedia, Summer 1994

[9] http://cui_www.unige.ch/OSG/MultimediaInfo/mmsurvey/standards.html, List of Multimedia Standards

[10] The Digital Audio Visual Council, DAVIC's First Call for Proposals, Version 1.0, Leidschendam, 14 October 1994

[11] SunVideo and Digital Video, Technical White Paper, Sun Microsystems Inc.

[12] Communique! User's Guide, Version 3.2 for use with the OpenLook Window Manager, InSoft Inc.


Last modified: Mon Mar 20 14:20:47 1995