Almost every home has a television today. It offers programmes from a number of available channels and is very simple to use. Cable TV (CATV) makes it possible to choose programmes from a large number of channels. Then came the video rental business which, in combination with a video recorder, allows customers to watch movies whenever they wish. This service may be called video on demand.
Nowadays Video-on-Demand (VoD) covers much wider services and opportunities. Today's technology allows telecommunication network operators to offer services such as home shopping, games, and movies on demand. These services should have a price competitive with video rental, and customers do not need to travel for them. These possibilities have been reached through developments in the telecommunication and electronics industries. The capacity of a hard disk has doubled almost every year at near-constant cost. The useful compression ratio for video has increased considerably: MPEG-formatted video can be transported at a bit rate of a few Mbit/s. Digital signal processing techniques permit the transport of a few Mbit/s over existing copper wires for a distance of a few kilometres. Finally, Asynchronous Transfer Mode (ATM) systems allow the switching of any reasonable bit rate to a single customer or to multiple customers among a large number of connected customers. However, today's transmission bandwidth is large only downstream towards the customer, with narrow upstream bandwidth. As upstream bandwidth widens in the future, interactivity between the customer and the service provider will increase.
This new technology is being developed all the time, because Video-on-Demand offers so many different applications to customers and its economic potential has been recognised. Many companies, organisations and universities are developing products and standards. Both cable TV and telephone operators are investing in their networks and running Video-on-Demand trials. To finance the required investments, higher consumer volumes must be reached on the residential side, rather than the business side that is running ahead in the development of the technology. The battle is hard, and it is getting harder all the time, so some companies have established business relationships to pool their knowledge and resources. In addition, they may avoid some regulatory restrictions before the telecommunication markets are opened to everyone.
Pay-per-View (PPV) services are the easiest to implement, and True Video-on-Demand (T-VoD) systems are the most difficult. PPV and Quasi-VoD (Q-VoD) are services like watching movies. In these cases, a local controller, the set-top box, can filter multiple channels to achieve the service. T-VoD requires a bi-directional signal from the user to a centralised controller.
Interactive services cover a wide range of services from movies-on-demand to distance learning. Some of the basic interactive multimedia services are listed below in Table 1.
Table 1. Interactive multimedia services.
Movies-on-Demand: Customers can select and play movies with full VCR capabilities.
Interactive video games: Customers can play downloadable computer games without having to buy a physical copy of the game.
Interactive news television: Newscasts tailored to customer tastes, with interactive selection and retrieval and the ability to see more detail on selected stories.
Catalogue browsing: Customers examine and purchase commercial products.
Distance learning: Customers subscribe to courses being taught at remote sites. Students tailor courses to individual preferences and time constraints.
Interactive advertising: Customers respond to advertiser surveys and are rewarded with free services and samples.
Video conferencing: Customers can negotiate with each other. This service can integrate audio, video, text, and graphics.
Figure 1: System elements.
The main VoD scenario consists of a local database and server connected to the user via a communications network. The data is stored at local distribution sites, which are connected through a high-speed backbone network to information archives and video servers. This distribution scheme serves many purposes. First, it can be implemented in a distributed fashion, increasing availability and reliability. Second, a provider can tailor the information delivery to the specific tastes of a user community in a particular geographic area, reducing costs. Third, it is easier to manage, as each local system is responsible for its own billing and accounting. Fourth, the system can be constructed in a regional, piecewise fashion.
The cost of the set-top unit must be limited to a reasonable price (a few hundred dollars) for the VoD technology to succeed. Open and interoperable systems that let users subscribe to several different services are preferred.
In combination with the telephone signals, which may be analogue or digital (ISDN), control (16 and 24 kbit/s) and video (2 to 6 Mbit/s) information channels may be transmitted downstream towards the customer. In the upstream direction there are at least telephone and control channels, and optional duplex bearers of up to 576 kbit/s.
Carrierless AM/PM (CAP), Discrete Multitone Transmission (DMT), and Frequency Division Multiplexing (FDM) are the modulation techniques under consideration. ADSL systems are used as local subscriber loops with telephone or basic ISDN access. ADSLs are not usable as Video-on-Demand (VoD) access networks when operating with PCM (Pulse Code Modulation) systems that use multiplexing techniques in the subscriber loop, because in that case the lines are not physically switched.
ADSL uses relatively low bitrates, and the return channel in particular is quite narrow, which limits emerging new services. It is based on a star configuration using unshielded wire cables, with a two-wire line for each user.
Due to physical constraints, such as cable attenuation and frequency distortion, ADSLs are limited in the distance they can cover. The spanning distances related to the transmission bandwidth through a cable with a diameter of 0.4 mm are the following.
On the other hand, network suppliers have to invest in some new devices. But more bandwidth is available, and customers could perhaps use the same terminal device as in the POTS. If new high-quality copper pairs are installed, bandwidths could be quite wide. According to ATM Forum specifications, the following bitrates are available: 155 Mbit/s (UTP5), 51 Mbit/s (UTP3) and 25 Mbit/s (IBM standard).
Channel transmission on the cable is primarily unidirectional. Signals are inserted on the downstream channels by the so-called head end. Signals from customer sites are only allowed on certain upstream channels, and they are only transmitted towards the head end. Although there is provision for upstream message transmission, many cable systems do not have the actual amplifiers and filters that are needed. In addition, the problems of signal regeneration and noise are harder in the upstream direction, as multiple noise sources are merged.
Cable systems are very vulnerable to physical damage, both from ageing and wilful destruction. Although the cables used are shielded coax, there are connections, each of which is a potential leakage source, especially as the cable degrades. With ageing, all these weak points become potential sources of noise ingress and leakage. If the power used in the cable is raised to limit the effects of noise ingress, there is more leakage. Furthermore, one big drawback of CATV connections is the lack of secrecy. If there is no ATM multiplexer at the centre of the CATV star or the root of the tree, all data travels on an ATM 'bus' that is available to all subscribers. This means more costs, because means must be provided to ensure privacy in the connections, and ATM does not provide encryption. This has to be done by other means, perhaps through encryption, but such encryption only takes place in the last hop.
This is a future technology, because several problems remain to be solved, such as the signal structure in the transition from fibre to wireless and a wireless reverse channel. Nevertheless, there have been some trials of wireless TV distribution networks: first, British Telecom's Millimetre-wave Multichannel Multipoint Video Distribution Service (MMDS), which operated at 29 GHz, and second, Cellular Vision's commercial offering in the New York area under a U.S. FCC pioneer licence in the 27.5 to 29.5 GHz band.
The video material can be stored on a combination of magnetic or magneto-optic disks and magnetic tape devices. Different storage media offer different memory bandwidths for VoD services. The most popular movies are stored in RAM, the less popular ones on hard disk, and the least popular on tertiary storage. Optical systems such as CD-ROM and magnetic systems such as tape drives are possible media for inexpensive tertiary storage for archival purposes. This kind of storage system reduces operating costs and can offer a wide selection of programmes to customers. In contrast, downloading and caching a complete programme at the user's home has many disadvantages. First, communication bandwidth is often so limited that the user must wait for most of the information to arrive; this delay could be unacceptable. Second, the customer's equipment would cost too much, because caching an entire programme needs a large amount of memory. Finally, information providers will not accept that their programmes are available for data duplication and piracy. Table 2 gives the estimated costs for different storage device types. It has been assumed that a 90-minute movie requires about 1 Gbyte of storage using MPEG1 compression.
Table 2. Costs for different storage device types.
Storage type     Cost/Mbyte   Sessions/device   Cost/movie stored
RAM              $50.00       200               $50,000
Hard disk        $0.50        5                 $500
R/W optical      $0.20        2                 $200
Magnetic tape    $0.01        1                 $10
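As a rough sanity check on Table 2 (a sketch, not taken from the source report), the 1-Gbyte figure follows from a typical MPEG1 rate of about 1.5 Mbit/s, and the cost-per-movie column is simply cost per Mbyte times movie size:

```python
# Sanity check for Table 2: a 90-minute MPEG1 movie at ~1.5 Mbit/s is about
# 1 Gbyte, and cost/movie = cost/Mbyte x movie size in Mbytes.
movie_bits = 1.5e6 * 90 * 60           # ~8.1e9 bits
movie_mbytes = movie_bits / 8 / 1e6    # ~1012 Mbyte, i.e. roughly 1 Gbyte

cost_per_mbyte = {"RAM": 50.00, "Hard disk": 0.50,
                  "R/W optical": 0.20, "Magnetic tape": 0.01}
for medium, cost in cost_per_mbyte.items():
    # using the table's round figure of 1000 Mbyte per stored movie
    print(f"{medium}: ${cost * 1000:,.0f} per stored movie")
```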
A metadata server is a database system which manages the metadata information. It contains abstract information about the location and characteristics of the data to be retrieved. The user can browse the information summaries and select a programme to be delivered to the home.
The TUT ATM network mainly utilizes 155 Mbps optical interfaces. The physical terminal connections are mainly implemented with OC3/STM1 multimode optical fibre, but some 100 Mbps TAXI interfaces are also in use. ATM interface cards from Efficient Networks (ENI-155s-MF) and Fore Systems (SBA200) are utilized. These cards offer card-specific application programming interfaces providing AAL5, Classical IP over ATM (RFC1577), and raw ATM interfaces. LAN Emulation support will be available later.
Presently (January 1995), the network is PVC-based, but signalling (UNI3.0, Q.2931) will be taken into use as soon as it becomes available for the equipment in use. SNMP-based network management is widely used in the network.
The ATM network connects a number of workstations including SPARCServer1000, SPARC20, SPARC10, DEC Alpha, and 486 EISA PC machines. The operating systems include SUN OS4.x, Solaris 2.x, and Windows.
A multimedia workstation network has been created by the Digital Media Institute. The network offers an opportunity to study real multimedia traffic under real circumstances. Transmission protocols, applications, and services can also be developed in such a multimedia network environment. Furthermore, the system makes it possible for the students and researchers of the university to use distributed multimedia services in their work, research, and studies. Several applications and services can be supported by the server, which can be utilized for application development, tests, and piloting.
The core of the server is a SPARCServer1000 (TM), which provides file server functions, database, and computing services. This computer is based on multiprocessor technology with four processors (a maximum of eight) and 256 MB of RAM. The server is capable of 2000 I/O operations per second. The operating system on the server is SOLARIS 2.3 (TM).
The server is supported by two fast SPARCStorageArrays (TM), each capable of storing 30 GB of data. The arrays are connected to the server with a high-performance Fibre Channel, a standards-based solution conforming to ANSI Fibre Channel FC-PH Rev 4.2, X3T11-755D and ANSI SCSI Fibre Channel Protocol Rev 8, X3T10-993D. The sustained data rate is 15 MB per second, though this is highly dependent on the configuration.
The server is connected through the campus ATM LAN network to the Finnish ATM network. ATM interface cards are from Efficient Networks (ENI-155s-MF), and they can provide 155 Mbps transfer rate. Several ATM cards can be connected in parallel to increase the throughput.
Presently (January 1995) the Multimedia Server acts as a file server, but a straightforward solution for carrying an MPEG2 transport stream over AAL5 is under development. In the future, the server will be developed towards the standards for carrying audiovisual multimedia services over ATM that are evolving in the ATM Forum and ITU-T.
The Digital Media Institute has a number of SPARC10 and SPARC20 workstations, equipped with an extensive range of multimedia devices and software (see Appendix B) and connected via ATM to the multimedia server, for research and development activities.
TUT has experience with a variety of public-domain and commercial hardware and software components adapted to ATM. The tools include:
- Parallax video capture and compression board
- SunVideo board
- Mbone tools for videoconferencing over Internet
- Communique! videoconferencing system
- ShowMe videoconferencing system
- Bitfield BVCS Video Communication System
- Uniflix video-on-demand software
- Mosaic, World Wide Web
In the digital media pilot project, companies are going to join the network and use the different services. Working with different companies provides useful experience and makes it possible to develop new services.
One of the field trials of VoD services is provided by the Helsinki Telephone Company (HPY). The trial concentrates on ADSL in the local loop. A local server is connected to ADSL transmission equipment, using a 2.048 Mbit/s bearer rate for a simplex downstream channel and 16 kbit/s for a duplex control channel. Telephone service (POTS) is offered to subscribers simultaneously with video transmission over the same subscriber line, using low- and high-pass filters (POTS splitters). At the customer premises, the ADSL equipment has interfaces for both a set-top box and a telephone. The local server used in the field trial can also be accessed remotely via an ATM network (in working association with the RACE project MARS). Network evolution towards the FTTC concept is also being discussed. The purpose of the trial is to investigate the technical aspects of copper in the local loop, because there is a need to deliver broadband services over the existing local loop investment without complete replacement.
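The 2.048 Mbit/s bearer also illustrates why streaming is preferred over download-and-play. A back-of-the-envelope sketch, using the 1-Gbyte movie size assumed earlier for a 90-minute MPEG1 film (the figure is not from the trial itself):

```python
# Time to download a ~1-Gbyte MPEG1 movie over the trial's 2.048 Mbit/s
# downstream bearer before playback could begin:
movie_bytes = 1e9
bearer_bytes_per_s = 2.048e6 / 8       # 256 kbyte/s
wait_minutes = movie_bytes / bearer_bytes_per_s / 60
print(round(wait_minutes))             # about 65 minutes of waiting
```

Streaming at the same bearer rate lets the 90-minute film play in real time instead, which is why the downloading approach criticised earlier is unattractive on such a link.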
The trial began in 1994 by connecting the homes of 70 BT employees. The unit attached to the television has a copper or fibre transmission link to the switch, where it is connected to the programme server. Consumers can choose entertainment and information services by controlling their home unit with a remote controller.
The system is used by telephone, which connects to Home Shopping's call centre. Instead of speaking with a live operator, more than half of the Home Shopping Club's customers communicate directly with the company's Unisys mainframe through their push-button phones. The computer then passes the order along to Home Shopping's warehouses.
For movies, the computer would process and send the order to the customer's local cable company by satellite or fibre-optic lines. The cable company would then switch on the customer's converter box to start the movie. If a customer wanted to pause the movie, the film could be delayed in five-minute increments with a button on the converter box. The system is a hybrid between Pay-per-View and Near Video-on-Demand. In addition to movies, the Home Shopping technology is designed to handle special Pay-per-View events, such as boxing matches and concerts.
During a three-month period, reported in the ADSL technical trial phase two report, a total of 268 employees participated in the technical trial. The participants' reactions to the services were tested, and the technical feasibility of the service in a variety of network environments was assessed. The report shows steady video usage by participants, an average of 2.6 hours per week. Participants were very pleased with the quality and reliability of the service. Although some initial problems were encountered, these decreased due to improvements made in the equipment and experience gained by the technicians and the participants themselves. Participants requested more features, more options, and a greater selection of movies. Nevertheless, they felt that the service was better than or comparable to other media (cable, VCR, broadcast) and easy to use.
To show detailed results, some examples are taken from the report. Set-top boxes were found to be the source of 22% of the trouble reports; these troubles were caused by software problems and faulty power supplies. The ADSL units (both in the central office and in participants' homes) were responsible for another 37% of the troubles, caused by faulty power amplifiers and capacitor modifications. In addition, participants were asked to comment on their satisfaction with the services. Overall satisfaction with the VoD services received an average rating of 6.9 on a scale of 1 to 10, where 1 means not at all satisfied and 10 means extremely satisfied. Some negative comments concerned the set-top boxes or the amount of dialling involved. Participants were also asked whether there had been any negative effects on the quality or reliability of their voice/data service after the installation of the VoD service. Very few problems concerned voice/data services, and the majority of the problems reported by participants were related to video usage.
The concept is based on ATM to the home with MPEG, over a hybrid fibre-coax transmission medium; the last mile is coax. The bitrate at the customer site is 3.5 Mbit/s for video. Each node is fed from AT&T's ATM switch at a 45 Mbit/s (DS3) rate and serves an average of 300 clients. At the service provider site there will be 8 SGI Indigo servers and 16 SGI Vaults, providing a total of 1.5 terabytes of disk capacity. The multiplexing is performed at the ATM level using VPI/VCI and AAL5. This information is cell-interleaved and error-protected with Reed-Solomon coding, then modulated using 64 QAM. In addition, the data is encrypted.
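These figures imply a hard limit on concurrency per node. A simple division (a sketch that ignores any statistical multiplexing gain and the ATM/AAL5 and Reed-Solomon overhead):

```python
# Concurrent 3.5 Mbit/s video streams per 45 Mbit/s (DS3) node feed:
ds3_rate = 45e6
video_rate = 3.5e6
streams = int(ds3_rate // video_rate)
print(streams)                          # 12 streams per node
# With ~300 clients per node, only a small fraction can watch at once:
print(round(streams / 300 * 100))       # roughly 4 percent
```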
The trial has just started, and the services do not yet work without problems. For example, the 150 ms remote-control delay poses a problem. However, the end-to-end transmission delay is only 200 ms. The trial continues, and new solutions are being researched all the time.
The MPEG standard has three parts: video encoding, audio encoding, and systems. The systems part includes information about the synchronisation of the audio and video streams. The video stream takes about 1.15 Mbit/s, and the remaining bandwidth is used by the audio and system data streams.
MPEG video encoding starts with a fairly low-resolution video picture (352 x 240 pixels x 30 frames/s in the US; 352 x 288 pixels x 25 frames/s in Europe). RGB pixel information is converted to chrominance/luminance and a complex, lossy compression algorithm is applied. The algorithm takes the time axis as well as the spatial axes into account, so a good compression ratio, up to 200:1, is achieved when the picture is relatively unchanging. The compressed data contains three types of frames: I (intra) frames are coded as still images; P (predicted) frames are deltas from the most recent past I or P frame; and B (bi-directional) frames are interpolations between I and P frames. I frames are sent once every 10 or 12 frames. Reconstructing a B frame for display requires the preceding and following I and/or P frames, so these are sent out of time order.
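The out-of-order transmission can be sketched as follows (a minimal illustration; the frame labels and the 12-frame group are assumed for the example). Because a B frame is interpolated from the anchors on both sides, the following I or P frame must be sent before the B frames that precede it in display order:

```python
def display_to_transmission(frames):
    """Reorder MPEG frames from display order to transmission order.

    A B frame needs the *next* anchor (I or P) frame for reconstruction,
    so that anchor is transmitted first, followed by the buffered B frames.
    """
    out, pending_b = [], []
    for f in frames:
        if f.startswith("B"):
            pending_b.append(f)      # hold until the next anchor arrives
        else:                        # I or P frame: send it, then held Bs
            out.append(f)
            out.extend(pending_b)
            pending_b = []
    # trailing B frames would really wait for the next group's I frame
    return out + pending_b

# A typical group with an I frame every 12 frames, in display order:
display = ["I1", "B2", "B3", "P4", "B5", "B6",
           "P7", "B8", "B9", "P10", "B11", "B12"]
print(display_to_transmission(display))
# ['I1', 'P4', 'B2', 'B3', 'P7', 'B5', 'B6', 'P10', 'B8', 'B9', 'B11', 'B12']
```

The decoder buffers each anchor frame until the B frames that depend on it have been reconstructed, then presents everything back in display order.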
Substantial computing power is required to encode MPEG data in real time, perhaps several hundred MIPS to encode 25 frames/s. Decoding is not as demanding. Finally, the quality of MPEG-encoded video is often said to be about the same as that of a VHS video recording.
Version 2 of MPEG is under development and will be completed by May 1995. MPEG2 is designed to offer higher quality at a bandwidth of between 4 and 10 Mbit/s. This is too fast for playback from CD using today's technology. MPEG2 will compress, e.g., 720 x 480 full-motion video in broadcast television and video-on-demand applications. It has several advantages compared with MPEG1:
MPEG2 over ATM has several problems. For example, transmission delay, delay jitter, and error correction are areas of intensive work today. The ATM Forum SAA/AMS working group's phase 1 specification for MPEG2 over ATM for the VoD service is evolving rapidly. The aim is to use the ATM equipment available today, so that the user needs to buy only an MPEG2 card; the problems are then solved in the way the equipment manufacturers consider necessary. First, the network is assumed to restrict the magnitude of the problems, and second, what remains is solved in the MPEG application.
This produces an obvious interoperability problem, as there are no clear specifications for the bitstream. The manufacturers can determine the price of the equipment when they sell complete systems to the users. Interoperability is optional in AMS phase 1, and it is achieved through support for an ITU standard called H.222.1.
H.222.1 sits in the ITU standards hierarchy, implementing the functions needed to interface MPEG2 applications to the ATM network. It also specifies support for high-quality videoconferencing functionality, error correction, and other AALs (AAL1 and AAL2) in addition to AAL5. H.222.1 and the other ITU recommendations related to MPEG2 transmission over ATM will be frozen in February 1995.
Furthermore, MPEG3 and MPEG4 are under development. MPEG3 was to compress full-motion, HDTV-quality video, with a required data rate thought to be 5 to 20 Mbit/s. However, MPEG3 is being merged into MPEG2, because MPEG2 is flexible enough for such applications. MPEG4 is associated with narrowband channels, such as mobile networks and POTS, which use small frame sizes and require slow refreshing. Such applications will need a data rate of 9 to 40 kbit/s. For detailed information, see the ISO/IEC 11172 specification for MPEG1 and the ISO/IEC 13818 specification for MPEG2.
JPEG is lossy, meaning that the image you get out of decompression is not quite identical to what you originally put in. The algorithm achieves much of its compression by exploiting known limitations of the human eye, notably the fact that small colour details are not perceived as well as small details of light-and-dark. Thus, JPEG is intended for compressing images that will be looked at by humans. If you plan to machine-analyse your images, the small errors introduced by JPEG may well be a problem for you, even if they are invisible to the eye.
A useful property of JPEG is that the degree of lossiness can be varied by adjusting the compression parameters. This means that the image maker can trade off file size against output image quality. You can make extremely small files if you do not mind poor quality; this is useful for indexing image archives, making thumbnail views or icons etc. Conversely, if you are not happy with the output quality at the default compression setting, you can jack up the quality until you are satisfied, and accept lesser compression.
Although it handles colour files well, it is limited in handling black-and-white files and files with sharp edges (the files come out very large). The processing costs, even on up-to-date computers, are also high. For further information, contact the independent JPEG group at firstname.lastname@example.org or ftp sites such as ftp.uu.net:/graphics/jpeg.
This standard is intended for carrying video over ISDN, in particular for face-to-face videophone applications and for videoconferencing. Videophone is less demanding of image quality and can be achieved with p = 1 or 2. For videoconferencing applications (where there is more than one person in the field of view), higher picture quality is required and p must be at least 6.
H.261 defines two picture formats: CIF (Common Intermediate Format) has 288 lines by 360 pixels/line of luminance information and 144 x 180 of chrominance information; and QCIF (Quarter Common Intermediate Format) which is 144 lines by 180 pixels/line of luminance and 72 x 90 of chrominance. The choice of CIF or QCIF depends on available channel capacity, e.g., QCIF is normally used if p<3.
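The compression ratio H.261 must deliver follows directly from these picture formats. A sketch using the figures above, assuming 8 bits per sample and 30 frames/s:

```python
# Uncompressed CIF data rate versus a p = 6 channel (6 x 64 kbit/s):
luma_samples = 360 * 288
chroma_samples = 2 * (180 * 144)        # two chrominance planes
bits_per_frame = (luma_samples + chroma_samples) * 8
raw_rate = bits_per_frame * 30          # ~37.3 Mbit/s uncompressed
channel_rate = 6 * 64e3                 # 384 kbit/s
print(round(raw_rate / channel_rate))   # a compression ratio of about 97:1
```

This is why QCIF (a quarter of the samples) is normally chosen when p < 3: the required compression ratio would otherwise be far higher still.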
The actual encoding algorithm is similar to, but incompatible with, that of MPEG. Another difference is that H.261 needs substantially less CPU power for real-time encoding than MPEG. The algorithm includes a mechanism which optimises bandwidth usage by trading picture quality against motion, so that a quickly-changing picture will have a lower quality than a relatively static picture. H.261 used in this way is thus a constant-bit-rate encoding rather than a constant-quality, variable-bit-rate encoding.
MHEG is suited to interactive hypermedia applications such as on-line textbooks and encyclopedia. It is also suited for many of the interactive multimedia applications currently available on CD-ROM. MHEG could for instance be used as the data structuring standard for a future home entertainment interactive multimedia appliance.
To address such markets, MHEG represents objects in a non-revisable form, and is therefore unsuitable as an input format for hypermedia authoring applications: its place is perhaps more as an output format for such tools. MHEG is thus not a multimedia document processing format, instead it provides rules for the structure of multimedia objects which permits the objects to be represented in a convenient form (e.g. video objects could be MPEG-encoded). It uses ASN.1 (abstract syntax notation 1) as a base syntax to represent object structure, but allows for the use of other syntax notations. In addition an SGML (Standard Generalised Mark-up Language) syntax is also specified.
There are four types of MHEG objects, which could be textual information, graphics, video, audio, etc.:
MHEG supports various synchronisation modes, for presenting output objects in these relationships.
Since the beginning, the MHEG partners have worked together in experimental programmes based on the interchange and validation of test objects and pilot implementations. OMHEGA (Open MHEG Applications) was launched by the Commission of the European Communities at the beginning of 1994 with the following objectives:
Other European projects began in January 1994, such as RACE/MARS (Multimedia Audiovisual Retrieval Service) and RACE/AMMIS (Advanced Man-Machine Interface for Selection of TV Programs), using MHEG objects in Video-on-Demand applications and multimedia TV guides for digital television broadcasting.
It seems that Part I of the MHEG specification may enter the International Standard (IS) stage by the end of 1994, Part II during 1995, and Part III by the end of 1995. The reference for MHEG is ITU-T T.170 | ISO.
The draft provides the specifications for the control of MPEG2 bitstreams in both standalone and distributed environments, with the following characteristics:
The DSM-CC protocol gives general applications, MHEG applications, and scripting languages access to primitives for establishing or deleting network connections, using User-Network (U-N) primitives, and for communication between a client and a server across a network, using User-User (U-U) primitives. U-U operations may use a Remote Procedure Call protocol. Both U-U and U-N operations may employ a message-passing scheme which involves a sequence of bit-pattern exchanges.
DSM-CC may be carried as a stream within an MPEG1 System Stream, an MPEG2 Transport Stream, or an MPEG2 Program Stream. Alternatively, DSM-CC may be carried over other transports, such as TCP or UDP.
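The bit-pattern message passing can be pictured with a toy example. This is purely illustrative: the header layout, field names, and operation code below are invented for the sketch and are not the real ISO/IEC 13818-6 wire format.

```python
import struct

# Hypothetical U-U message: a 16-bit transaction id and a 16-bit operation
# code, big-endian, with no payload. NOT the real DSM-CC encoding.
def pack_message(transaction_id, op_code):
    return struct.pack(">HH", transaction_id, op_code)

def unpack_message(msg):
    return struct.unpack(">HH", msg)

request = pack_message(42, 1)     # op 1 = "open session" in this sketch only
print(unpack_message(request))    # (42, 1)
```

The real protocol defines many such message types, each a fixed sequence of fields that both client and server parse in the same way regardless of the underlying transport.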
The Working Draft of ISO/IEC 13818-6 was produced in November 1994. ISO/IEC JTC1/SC29 Working Group 11 is working on DSM-CC, and many issues are currently under study.
The First DAVIC Call for Proposals is an invitation to submit proposals for cross-industry and cross-national end-to-end interfaces and protocols and other technical and factual data that are relevant for a range of emerging digital audio-visual applications and services. The council invites proposals from content creators, packagers, programmers, server manufacturers, delivery media and service organisations, terminal device manufacturers, software vendors, integrators, governmental and regulatory organisations, standards bodies, research and academic organisations, industry consortia, and any individual or organisation whose interest covers digital audio-visual applications and services.
Although there is enthusiasm to deploy systems that support the widest possible range of services, the industry can be expected to grow as revenues from first services fuel continued developments. Therefore, the Call for Proposals concentrates on a subset of services and applications, i.e., Video-on-Demand and its derivatives, which are believed to be of immediate importance to DAVIC members. However, it has invited respondents to make concrete proposals on how other services can be supported as well.
DAVICs specifications aim to achieve the following:
One of the DAVIC core services that has greatly influenced the council's thinking is Video-on-Demand. The following characteristics or functionalities of the VoD services have been identified:
The proposals were considered at the Tokyo meeting on 5-7 December 1994. A working reference model will be produced at the March 1995 meeting, interoperability experiments will start in June, and the first set of specifications will be completed in December 1995. DAVIC is an open organisation and expressly encourages any corporate entity to become a member. Inquiries on membership should be directed to Dr. Leonardo Chiariglione, e-mail email@example.com.
The SunVideo system is an essential enabling technology for implementing multimedia applications such as video conferencing, video mail, or compound document creation incorporating video clips for training or presentations.
This technology offers access to video as another data type to be manipulated by the computer for a host of applications. With a single card, video can be captured for use in presentations or as an element in a multimedia training or information system. Recipients of broadcasts of digital video for announcements or lectures do not need any special video hardware. Video conferencing applications supporting multiple simultaneous users are possible when the participants all have capture and compression capabilities, provided by the SunVideo board.
The server stores digitised video films like movies and commercials in a compressed digital format. The compressed movie data can be decompressed in real time to deliver an NTSC video output to support near Video-on-Demand, ad-insertion and other miscellaneous motion video playback applications.
The Perspective 2000 uses MPEG2 compression/expansion technology, but is not limited to the MPEG2 compression scheme. This digital technology allows each Perspective 2000 to output multiple NTSC playback channels. The MPEG2 Perspective 2000 decoder supports full decoding of MPEG2 ISO 13818 bitstreams.
The Site Manager system monitors and controls the video server from either a local or a remote site. The status response from each server contains voltage levels, internal temperatures, fan speeds, disk-array status, movie playback status, buffer/decompression engine status, software and hardware revision levels, and other related statistics. These data are stored in a database in the Site Manager.
The Uniflix client-server architecture allows clients on a network to remotely access compressed video streams generated by Uniflix video servers. The servers generate the streams using either of the video cards mentioned above. Together, Uniflix servers and clients offer an integrated Video-on-Demand software solution.
The SunVideo-based Uniflix server is capable of compressing and transmitting video data throughout a network at a data rate from 0.128 to 2.0 Mbit/s. The latter is sufficient to deliver 30 frames/s at one-quarter NTSC resolution.
The PowerVideo-based Uniflix server is capable of compressing and transmitting video data at a data rate from 0.128 to 14.0 Mbit/s. The latter is sufficient to deliver 30 frames/s at full NTSC resolution and near-broadcast quality.
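To put these figures in perspective, a rough calculation shows the compression ratios involved. The frame sizes and bit depth below (640x480 for full NTSC, 320x240 for quarter NTSC, 16 bits/pixel for 4:2:2 sampling) are assumptions for illustration, not figures from the Uniflix specification:

```python
# Back-of-the-envelope compression ratios for the Uniflix rates above.
# Assumed parameters (not from the source): 640x480 full NTSC,
# 320x240 quarter NTSC, 30 frames/s, 16 bits/pixel (4:2:2 sampling).

def raw_bitrate(width, height, fps=30, bits_per_pixel=16):
    """Uncompressed video bit rate in Mbit/s."""
    return width * height * fps * bits_per_pixel / 1e6

full_ntsc = raw_bitrate(640, 480)      # ~147.5 Mbit/s uncompressed
quarter_ntsc = raw_bitrate(320, 240)   # ~36.9 Mbit/s uncompressed

print(f"full NTSC: {full_ntsc:.1f} Mbit/s raw, "
      f"ratio at 14 Mbit/s = {full_ntsc / 14:.0f}:1")
print(f"quarter NTSC: {quarter_ntsc:.1f} Mbit/s raw, "
      f"ratio at 2 Mbit/s = {quarter_ntsc / 2:.0f}:1")
```

Under these assumptions, the quoted peak rates correspond to compression ratios on the order of 10:1 to 20:1, well within MPEG's reach.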
Digital video/audio can be compressed and stored in a Unix file using the Uniflix capture utility. The video can be captured at up to 30 frames/s at full NTSC resolution. The captured video can be edited using the Uniflix video editor and can be made available to clients on the network via a movie-on-demand video file server.
All servers and clients support four modes of operation: Video-on-Demand, continuous multicast, broadcast, and point-to-point transmission/reception of the video and audio data. Any network interface on the workstation supporting the TCP/IP protocol can be used to transport the data, including Ethernet, ATM, FDDI, and others. Depending on the bandwidth capacity of the network, multiple servers (including live and file servers) can run on it simultaneously.
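A minimal sketch of the continuous multicast mode over a TCP/IP network, using UDP datagrams. The group address, port number, and 4-byte sequence-number framing are illustrative assumptions, not Uniflix's actual wire format (real systems would use RTP-style headers):

```python
# Sketch of multicast video transport over an IP network.
# Addresses, port, and framing are hypothetical.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004   # hypothetical multicast group

def pack_frame(seq, payload):
    """Prefix a frame with a sequence number so receivers can detect loss."""
    return struct.pack("!I", seq) + payload

def make_sender(ttl=1):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Limit how far multicast datagrams propagate.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return s

def make_receiver():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s

# A sender would then loop: sock.sendto(pack_frame(n, data), (GROUP, PORT))
```

The same socket code covers broadcast and point-to-point delivery by substituting a broadcast or unicast destination address; the Video-on-Demand mode additionally needs a per-client control channel for play, pause, and seek requests.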
StarWorks client/server-based software delivers cost-effective performance for workgroup video applications. It provides reliable network delivery and Video-on-Demand services from a standard enterprise server, and meets the needs of even high-bandwidth video. StarWorks for the Solaris software environment supports hundreds of simultaneous Windows, Macintosh, Solaris, and DOS users, who can view video applications while accessing other non-video applications, such as NFS servers and databases. StarWorks can even reside on the same server as other applications.
With StarWorks software, a SPARC-based server can handle over 100 gigabytes of digital video and audio storage and thousands of video and audio files. StarWorks server software is built on the Solaris computing environment to meet the real-time requirements of streaming data, yet fits into the existing networking environment. It automatically maps the network topology and balances the network load. In addition, StarWorks guarantees video delivery using resource reservation and offers easy field maintenance, diagnostics, repair, and network management functions.
The StarWorks Solaris Client Development Pak provides an interface to help application and tool vendors develop total networked video solutions.
Communique! contains a suite of easily managed iconic tools for defining and initiating an on-line, real-time conference with fellow workgroup members. Like a physical conference room, the Communique! Virtual Conference Room has tools that help people exchange ideas and information easily: audio conferencing, video conferencing, a shared whiteboard, and shared text tools are just some of the tools available to the conference members. In a meeting, these tools can be used to get a point across to the other participants. Since Communique! runs on the user's workstation, images and data can be grabbed from any workstation application for discussion in the conference.
To set up a meeting, the software first creates connections by adding users on the same local area network, or users identified by their address, to the meeting. There can be up to ten participants at the same time, and users can be added or removed during the meeting. Participants are shown as icons that indicate their state: active, absent, or quit.
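The roster behaviour described above can be sketched as a small state model; the class and method names are assumptions for illustration, not Communique!'s actual API:

```python
# Illustrative model of a ten-participant conference roster.
# Names and structure are hypothetical, not the product's real interface.
from enum import Enum

class State(Enum):
    ACTIVE = "active"
    ABSENT = "absent"
    QUIT = "quit"

class Meeting:
    MAX_PARTICIPANTS = 10   # the limit quoted in the text

    def __init__(self):
        self.participants = {}   # address -> State

    def add(self, address):
        if len(self.participants) >= self.MAX_PARTICIPANTS:
            raise RuntimeError("meeting is full")
        self.participants[address] = State.ACTIVE

    def remove(self, address):
        self.participants.pop(address, None)

    def set_state(self, address, state):
        self.participants[address] = state
```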
Sun Microsystems has also introduced new software called ShowMe TV. It allows users to view multiple video broadcasts, record or edit television programmes, and transmit broadcasts to selected groups for training and other corporate communications. ShowMe TV is thus a networked digital video broadcast product.
Designed for Sun SPARC computers running the Solaris environment and standard TCP/IP networks, ShowMe TV consists of two primary components: a receiver and a transmitter. The ShowMe TV Receiver allows the user to display, control, and record programme material that is broadcast over the existing local area network. The receiver's integrated digital VCR capabilities enable the capture of a segment or an entire broadcast. The recorded programme is stored on local or remote disks for archiving or editing. The ShowMe TV Transmitter broadcasts video and audio source material over the network to any computer running Solaris and the ShowMe TV Receiver software. Multiple video and audio channels can be broadcast simultaneously to everyone on a network or to individuals. A broadcast scheduler is used to enter programme and scheduling information in a programme guide that can be accessed on the network. ShowMe TV uses efficient compression techniques to broadcast over the existing network without disrupting normal data flow or use of the network. Adding users to an existing broadcast channel does not impact the performance of the network.
Any SPARCstation can receive and display broadcasts with the installation of ShowMe TV Receiver software. Similarly, any SPARCstation can become a broadcast node by adding a SunVideo board, ShowMe TV Transmitter software, and a video source such as a VCR, camera, or tuner. The ShowMe TV Transmitter requires Solaris 2.3, while the ShowMe TV Receiver works on Solaris 2.2 or later, as well as Solaris 1.x versions.
Thomas D.C. Little and Dinesh Venkatesh, Prospects for Interactive Video-on-Demand, IEEE Multimedia, Fall 1994
Döring K.-H., Extension of Public Networks for VoD Services, Deutsche Telekom FTZ, 6 June 1994
R.D. Carver, Millimetre-wave Radio for Broadband Local Access, ICC '91 Conf. Record, vol. 3, pp. 1187-1190, Denver, Colo., June 1991
Cellular Vision, 300 Park Ave., New York, NY 10022
Jukka Behm, TEKES Tekniikan Näköalat, 4/1994
Tsokkinen Mikko, Multimedia ATM-verkossa (Master's thesis), p. 21, 1994
Borko Furht, Multimedia Systems: An Overview, IEEE Multimedia, Spring 1994
Francoise Colaitis, Opening Up Multimedia Object Exchange with MHEG, IEEE Multimedia, Summer 1994
http://cui_www.unige.ch/OSG/MultimediaInfo/mmsurvey/standards.html, List of Multimedia Standards
The Digital Audio Visual Council, DAVIC's First Call for Proposals, Version 1.0, Leidschendam, 14 October 1994
SunVideo and Digital Video, Technical White Paper, Sun Microsystems Inc.
Communique! User's Guide, Version 3.2 for use with the OpenLook Window Manager, InSoft Inc.