Keynote lectures are plenary sessions scheduled to last about 45 minutes, plus 10 minutes for questions.

- Fernando Pereira, Instituto Superior Técnico - Instituto de Telecomunicações, Portugal
- Anisse Taleb, Ericsson AB, Sweden

Keynote Lecture 1
Multimedia Representation in MPEG Standards: Achievements and Challenges
Fernando Pereira,
Instituto Superior Técnico - Instituto de Telecomunicações

Brief Bio

Fernando Pereira was born in Vermelha, Portugal, in October 1962. He graduated in Electrical and Computer Engineering from Instituto Superior Técnico (IST), Technical University of Lisbon, Portugal, in 1985, and received the M.Sc. and Ph.D. degrees in Electrical and Computer Engineering from IST in 1988 and 1991, respectively. He is currently a Professor in the Electrical and Computer Engineering Department of IST. He is responsible for IST's participation in many national and international research projects and often acts as a project evaluator and auditor for various organizations. He is a member of the Editorial Board and Area Editor on Image/Video Compression of the Signal Processing: Image Communication journal, a member of the IEEE Press Board, and an Associate Editor of the IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Image Processing, and IEEE Transactions on Multimedia. He is an IEEE Distinguished Lecturer and a member of the scientific and program committees of dozens of international conferences and workshops. He has contributed more than 180 papers to journals and international conferences. He won the 1990 Portuguese IBM Award and an ISO Award for Outstanding Technical Contribution for his participation in the development of the MPEG-4 Visual standard. He has participated in the work of ISO/MPEG for many years, notably as head of the Portuguese delegation, chairman of the MPEG Requirements group, and chair of many ad hoc groups related to the MPEG-4 and MPEG-7 standards. His current areas of interest are video analysis, processing, coding, description, adaptation, and multimedia interactive services.

Abstract:

The fast evolution of digital technology in the last decade has deeply transformed the way information, notably visual information, is generated, processed, transmitted, stored, and finally consumed. The need for standards in this technological area stems from an essential requirement of all applications involving communication between two or more parties: interoperability. The existence of a standard also has important economic implications, since it allows costs and investments to be shared and accelerates the deployment of applications. Among the most relevant standardization achievements in the area of media representation are those of ISO/MPEG and ITU-T, some of them jointly developed, such as MPEG-2/H.262. Standards are typically repositories of the best available technology and are thus an excellent place to check technology evolution and trends.
The ISO/MPEG standardization committee has been responsible for the successful MPEG-1 and MPEG-2 standards, which have given rise to widely adopted commercial products and services such as Video-CD, DVD, digital television, digital audio broadcasting (DAB), and MP3 (MPEG-1 Audio Layer 3) players and recorders. More recently, the MPEG-4 standard set out to define an audiovisual coding standard addressing the emerging needs of the communication, interactive, and broadcasting service models, as well as the mixed service models resulting from their technological convergence. Following the same vision underpinning MPEG-4, MPEG afterwards initiated another standardization project, MPEG-7, addressing the problem of describing multimedia content to allow the quick and efficient searching, processing, filtering, and summarization of various types of multimedia material. After the development of the standards mentioned above, MPEG acknowledged the lack of a “big picture” describing how the various elements building the infrastructure for the deployment of multimedia applications relate to each other, or whether standard specifications are missing for some of these elements. To address this problem, MPEG started the MPEG-21 project, formally called “Multimedia Framework”, with the aim of understanding if and how these various elements fit together, and of discussing which new standards may be required if gaps in the infrastructure exist.
In a similar manner, ITU-T defined standards such as H.261 and H.263 for videotelephony and videoconferencing over different types of channels, and it has just finished developing, jointly with MPEG, the H.264/AVC (Advanced Video Coding) standard. This new standard provides significant improvements in (frame-based) coding efficiency. These joint ISO/ITU-T developments highlight the convergence of technologies for media representation, independently of the transmission and storage media and, most of the time, of the application and business models involved.
Currently, MPEG and VCEG are jointly developing a scalable video coding (SVC) standard, targeting a coding efficiency similar to that of state-of-the-art non-scalable standards, and a multiview video coding (MVC) standard, targeting multiview and free-viewpoint applications that allow the viewpoint to be changed freely by rendering one (real or virtual) view. Both new standards retain some degree of backward compatibility with the H.264/AVC standard.

Keynote Lecture 2
Advances in Speech and Audio Coding and Its Applications for Mobile Multimedia
Anisse Taleb,
Ericsson AB, Sweden

Brief Bio

Anisse Taleb received the M.Sc. and Ph.D. degrees from the Institut National Polytechnique de Grenoble (INPG), France, in 1996 and 1999, respectively. His Ph.D. thesis was among the first research studies on blind source separation involving nonlinear mixtures, and he was awarded the INPG Best Thesis Prize in 2000. After a short period, from 2000 to 2001, as a post-doctoral research fellow at the Australian Telecommunications Research Institute, Perth, he joined Ericsson as a research engineer in the field of audio technology. Among other things, he was involved in the development of a new generic wideband acoustic echo canceler for mobile phones, as well as the development of speech and audio bandwidth extension algorithms. Anisse Taleb was heavily involved in the development and standardization of AMR-WB+, a low-bitrate audio codec specifically targeted at mobile multimedia applications, which has been standardized by 3GPP (the Third Generation Partnership Project). He is currently active in the M-PIPE 6th Framework European project, where he works on scalable audio coding, and in standardization bodies such as MPEG. His research interests include multichannel audio coding, speech and audio coding, and source coding in general.

Abstract:

Looking back at the evolution of audio compression technology, there has been tremendous progress over the last two decades. With the advent of the digital signal processor, algorithms for removing redundancy and irrelevance in speech and audio signals have become increasingly sophisticated. Progress in speech compression technology has been strongly driven by telecommunications needs. This is especially true for the cellular phone industry, with its desire to maximize the number of channels per MHz of spectrum, thus pushing the limits of speech signal compression. Generic audio compression technology, on the other hand, has been driven by the advent of new distribution channels such as digital terrestrial radio, digital satellite radio, and of course the public Internet, together with the tremendous success of the MP3 compression format. On top of industry needs, user behavior has evolved: users have advanced from being passive content consumers to actively searching for new compressed online audio content, and media content in general.

This talk will address the recently standardized audio codecs, the technologies behind their successful adoption, and how and why it is believed they will meet users' needs. A clear focus will be given to mobile applications and, more importantly, to an evaluation of how the quality of these codecs is affected when the available bandwidth is reduced; this will be shown especially for audio-visual applications.