“Real benefits in real life: developing and testing state-of-the-art relevant signal processing in hearing aids”
Lars Bramsløw
Ph.D., Principal Scientist
Eriksholm Research Centre, Oticon A/S
Denmark
Modern hearing aids pack incredible signal processing and wireless features into a single-milliwatt power budget. They must be designed to be worn all day, provide tangible end-user benefits for different levels of hearing loss, and interface with dedicated smartphone apps. Each hearing loss is unique, and the signal processing, including hearing loss compensation, must be sophisticated and individually tailored to address the user's challenges, especially communicating in noisy environments: ‘the cocktail party problem’.
Current hearing aid development is based on deep insight into hearing and hearing loss and a broad knowledge of signal processing, spanning dynamic range compression, noise reduction, beamforming, anti-feedback, artificial intelligence, etc. All this must be integrated into a stable and pleasant audio processing system delivering optimum performance in all possible listening scenarios. The development follows a patient, iterative and pragmatic cycle. For listening evaluation in this cycle, a combination of signal metrics, informal listening and formal listening tests is applied. Traditionally, we focus on speech intelligibility in noise and sound quality, but newer outcome measures are also used, such as listening effort, attention, and physiological measurements.
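To make one of these building blocks concrete, here is a minimal sketch of a static wide-dynamic-range compression curve of the kind used for hearing loss compensation. The threshold and ratio values are illustrative assumptions, not Oticon fitting parameters.

```python
import numpy as np

def compress_db(level_db, threshold_db=-40.0, ratio=3.0):
    """Static input/output curve of a wide-dynamic-range compressor.

    Below the threshold the output follows the input linearly; above
    it, `ratio` dB of input level change produce only 1 dB of output
    change, squeezing loud sounds into the reduced dynamic range of an
    impaired ear. Threshold and ratio here are illustrative only.
    """
    level_db = np.asarray(level_db, dtype=float)
    excess = np.maximum(level_db - threshold_db, 0.0)  # dB above threshold
    return level_db - excess * (1.0 - 1.0 / ratio)

# A 30 dB rise above threshold comes out as only a 10 dB rise:
print(compress_db(-40.0), compress_db(-10.0))  # -40.0 -30.0
```

A real hearing aid applies such curves per frequency band with attack and release time constants; this sketch shows only the steady-state gain rule.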
Despite the hardware restrictions, the main limitation to better treatment of hearing loss today is fundamental research on hearing, hearing loss and how it relates to the daily life of the user. Hence, more hearing research is needed to further improve the devices.
Lars Bramsløw, PhD, is a Principal Scientist at the Eriksholm Research Centre, part of Oticon. He holds an M.Sc. and a Ph.D. degree, both from the Technical University of Denmark, awarded in 1986 and 1993, respectively. The Ph.D. was carried out at Eriksholm on the topic of sound quality and neural networks.
Lars has 30 years of extensive experience in acoustics, hearing science and hearing aid research and development, including employment at House Ear Institute in Los Angeles, Eriksholm and Oticon headquarters in Smørum, Denmark. He is currently working on improved fitting of hearing aids for the individual and the application of deep learning algorithms in hearing health care. He is also the current president of the Danish Acoustical Society.
“AI and IoT for future generation of medical delivery with drones”
Giuseppe Tortora
Founder and CEO of ABzero
ABzero
Italy
Artificial Intelligence algorithms can be usefully applied in real-life medical scenarios.
In this tutorial, AI-driven drone delivery applications at hospital facilities are presented as case studies.
The tutorial reports on an innovative drone delivery solution that originated as a technology transfer from the Smart Medical Theatre Lab to ABzero, an innovative Italian startup company.
Research and innovation came together in joint collaborations between universities and companies, as in the case of the University of Poznan, where artificial vision algorithms were improved for application to the delivery system.
With an application-oriented and entrepreneurial approach, this workshop will introduce the concepts of drone delivery of medical materials, such as blood, blood components and organs.
Giuseppe Tortora is the founder and CEO of ABzero, an Italian startup company. After university and research training at the Scuola Superiore Sant'Anna in Pisa, Imperial College London, and Carnegie Mellon University in Pittsburgh, in the fields of medical robotics, minimally invasive robotic surgery and microrobotics, Giuseppe has gained entrepreneurial experience since 2014. He has received many international awards for entrepreneurs and has experience from a previous medical startup.
“From embedded to AI-powered Cyberphysical systems and beyond”
Asst. Prof. Christos P. Antonopoulos
Electrical & Computer Engineering Department
University of Peloponnese
Greece
Cyber-physical Systems of Systems (CPSoS) are large, complex systems in which physical elements interact with many distributed computing elements and human users. The main challenges associated with the design and operation of reliable connected CPSs stem from continuously increasing demands for resource availability, novel high-added-value services and product quality, while adhering to requirements for low cost, low power and competitiveness in the global market. This talk puts forward a set of CPSoS projects aiming at developing models, architectures and software tools for allocating computing resources to CPS end devices, while autonomously detecting which cyber-physical processes will be performed by the device components, including heterogeneous CPU elements and communication interfaces, GPUs, FPGA fabrics, and software stacks. Collaborating in a decentralized way and exchanging tasks and data without central intervention, the proposed projects leverage state-of-the-art artificial intelligence technologies to improve their performance, reliability, robustness and security.
Asst. Prof. Christos P. Antonopoulos received his Diploma and PhD in Electrical Engineering and Computer Technology from the ECE Department of the University of Patras, Greece, in 2002 and 2008, respectively. Since 2019 he has held the position of Assistant Professor at the ECE Department of the University of the Peloponnese, Greece. Since 2002, he has been involved in more than 15 European projects (FP5, 6, 7 and Horizon 2020) and more than 6 Greek national research projects, holding key technical as well as managerial positions, and he has participated in the preparation of a large number of research project proposals. He has published over 100 research papers and 13 book chapters in international journals and conferences, which have received over 900 citations, and he has served as editor of two Springer books. Since 2005 he has worked and collaborated as a Senior Researcher with several research institutions (Industrial Systems Institute (ISI) Patras, ECE Dept. University of Patras, Technological and Educational Institute of Western Greece, Hellenic Open University) and industrial companies (INTRACOM-Telecom, NOESIS). Over the years, he has reviewed a large number of submitted papers for high-quality IEEE, ACM, Elsevier, Springer, Hindawi and MDPI journals and conferences, and he has served as organizer, PC/TC member and session chair at several prestigious international conferences. Additionally, since 2003 he has worked continuously as an adjunct assistant professor for various departments at Greek universities and technological institutes.
His main research interests include: Wireless Networks, Network Communication Protocols, Network Simulation, Performance Evaluation, Embedded Software, Sensor Networks, Wireless Broadband Access Networks, Embedded System Architecture and Programming, Cross-Layer protocols, Wireless Networks power optimization, Internet of things, Cyberphysical systems.
“Red Pitaya: Empowering signal processing education and research journey”
Miha Gjura
Technical Specialist
Red Pitaya
Slovenia
Join us for an engaging tutorial highlighting Red Pitaya as the perfect tool to accompany students on their signal processing education and research journey. Designed for signal processing academics, researchers, and educators, this session will showcase the user-friendly and practical applications of Red Pitaya, enabling students to develop expertise in signal processing.
Discover how easy it is for students to get started with Red Pitaya, as we walk you through the seamless connection via a browser and SSH. Witness the platform's capabilities in action as students make their first signal acquisitions using the Red Pitaya Oscilloscope and signal generator.
The ability to perform signal acquisition and generation through SCPI commands empowers students to set up custom signal acquisition with confidence, providing a solid foundation for their signal processing exploration.
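As a concrete illustration, a SCPI session with the board can be as simple as a line-based TCP exchange. The sketch below assumes the board's SCPI server is reachable on the default port 5000 and that buffer replies arrive in the '{v1,v2,...}' form; treat the command strings in the commented session as examples to check against the Red Pitaya SCPI documentation, not a verified recipe.

```python
import socket

def parse_buffer(reply):
    """Parse a '{0.1,0.2,-0.3}'-style buffer reply into a list of floats."""
    return [float(v) for v in reply.strip().strip('{}').split(',') if v]

class ScpiClient:
    """Minimal line-based SCPI-over-TCP client."""

    def __init__(self, host, port=5000):
        self.sock = socket.create_connection((host, port))

    def send(self, cmd):
        # SCPI commands are newline-terminated ASCII lines.
        self.sock.sendall((cmd + '\r\n').encode())

    def query(self, cmd):
        self.send(cmd)
        return self.sock.recv(1 << 16).decode()

# Hypothetical session (command names to be verified against the docs):
# rp = ScpiClient('rp-f0xxxx.local')
# rp.send('ACQ:START')
# rp.send('ACQ:TRIG NOW')
# samples = parse_buffer(rp.query('ACQ:SOUR1:DATA?'))
```

The same pattern works from MATLAB or LabVIEW; only the transport wrapper changes.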
As students progress in their journey, they can unleash the full potential of Red Pitaya by programming its FPGA using Vivado. This opens up exciting opportunities for advanced signal processing projects and research endeavors.
Get inspired with real-world examples of projects that highlight Red Pitaya's adaptability, from radars to other innovative applications. Red Pitaya serves as the ideal companion for students as they undertake research projects, advancing their signal processing expertise.
For educators, Red Pitaya offers a powerful teaching tool to engage students at every level of their signal processing learning. For researchers, Red Pitaya provides a versatile and capable platform to explore cutting-edge signal processing applications.
With informative demos and an interactive Q&A, this online presentation is a must for educators seeking powerful teaching tools and researchers looking for a reliable companion in their signal processing investigations. Embrace the future of signal processing education and research with Red Pitaya, empowering students and researchers to excel in this dynamic field.
Miha Gjura is a Technical Specialist at Red Pitaya, a Slovenian company renowned for developing versatile open-source software-defined instruments and FPGA development platforms. His expertise lies in OS testing, Python programming, FPGA, and C development, enabling significant contributions to enhance and optimize Red Pitaya's innovative solutions. Miha's dedication to education is evident in preparing workshops for students using the Red Pitaya platform, empowering the next generation of engineers and scientists with practical insights. He recently led a hackathon challenge at UCSD, showcasing the innovative capabilities of Red Pitaya's products and fostering exploration of new technological frontiers. Furthermore, Miha serves as a valuable resource for Red Pitaya users, offering troubleshooting assistance and support to maximize the potential of their devices in projects and research endeavours.
“Memories of The Future”
Prof. Thomas M. Coughlin
President, Coughlin Associates
United States
Digital storage and memory are critical elements in modern data processing, enabling training of large AI models, scientific and engineering data analysis, smart factories and most consumer applications. This talk will discuss available storage and memory technologies (DRAM, SRAM, NAND Flash, SSDs, HDDs, magnetic tape and optical recording, as well as emerging non-volatile memory technologies), including how they are used and projections for the future of digital storage and memory.
Tom Coughlin, President of Coughlin Associates, is a digital storage analyst and business/technology consultant. He has over 40 years in the data storage industry in engineering and senior management positions. Coughlin Associates consults, publishes books as well as market and technology reports, and puts on digital storage-oriented events. He is a regular contributor to forbes.com and M&E organization websites. He is an IEEE Fellow, 2023 IEEE President-Elect, Past President of IEEE-USA, Past Director of IEEE Region 6 and Past Chair of the Santa Clara Valley IEEE Section, and is also active with SNIA and SMPTE. For more information on Tom Coughlin go to www.tomcoughlin.com.
GUEST SPEAKERS - TUTORIALS GIVEN AT FORMER SPA CONFERENCES
Orthogonal filtering consists in using orthogonal or unitary transformations to process data, in order to extract interesting characteristics or to achieve desirable control tasks. It is the basic step in many well-conditioned numerical operations on data, in particular quadratic optimal control, Kalman filtering, smoothing, spectral factorization, selective filtering, feature extraction, back propagation, etc. The simplest numerical realization of an orthogonal filter is known as ‘QR factorization’, while in classical filtering theory it is known as ‘minimal phase extraction’. Mathematicians often use the terms ‘inner-outer’ and ‘outer-inner’ factorization, dating back to Hardy-space theory in the early 20th century. The technique is very generally applicable, even in time-variant and non-linear situations (although the latter have not been fully explored so far). Conversely, it turns out that many classical problems that have been treated with ‘brute force’ methods can very elegantly and even simply be solved using orthogonal filtering directly. The tutorial will show how that works.
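As a small illustration of the point (a sketch, not taken from the tutorial itself), the QR step turns a least-squares problem into a well-conditioned triangular solve:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # tall data matrix
b = rng.standard_normal(6)

# Orthogonal filtering step: A = QR, with Q having orthonormal columns.
Q, R = np.linalg.qr(A)
assert np.allclose(Q.T @ Q, np.eye(3))   # orthogonality is exact

# The normal equations A^T A x = A^T b collapse to R x = Q^T b,
# avoiding the squared condition number of forming A^T A explicitly.
x = np.linalg.solve(R, Q.T @ b)
assert np.allclose(A.T @ (A @ x), A.T @ b)
```

Because multiplying by Q and its transpose never amplifies rounding errors, the same pattern underlies the square-root (array) forms of Kalman filtering mentioned above.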
Patrick Dewilde (EE `66 KULeuven, Belgian Lic. Math. `68 and PhD `70 Stanford University) has been a professor of electrical engineering at the Technical University of Delft for 31 years, director of the Delft Institute for Micro-electronics DIMES for ten years, chairman of the Technology Foundation STW (a major Dutch research funding agency) for eight years and director of the Institute for Advanced Study of TU Munich for five years. His research, published in the international scientific literature and several books, has focused on mathematical issues related to the design, control and operation of dynamical systems in general, and circuits and systems for signal processing in particular. He has been an IEEE Fellow since 1981, is an elected member of the Dutch Royal Academy of Arts and Sciences, has been elevated to the rank of Knight of the Dutch Lion, and is presently an honorary professor at both the Technische Universität München and the Technical University of Wroclaw.
“On Digital Signal Processing for FMCW-MIMO-based Radar-Sensors”
Prof. Dr. Marcel Rupf
Institute of Signal Processing and Wireless Communications (ISC)
ZHAW School of Engineering
Switzerland
Today, FMCW-MIMO-based (mm-wave) radar sensors are widely used in automotive and industrial applications because the FMCW technique has some important advantages over the pulse-echo principle. In this tutorial lecture, it will be shown that the range, the Doppler speed and the angle of arrival of multiple targets can be determined efficiently using FFTs. Additionally, some more challenging topics like clustering, tracking and classification of targets will be addressed, based on classical DSP algorithms and Deep Neural Networks (DNNs). Finally, some radar sensing applications will be shown, e.g. bird migration monitoring, vehicle tracking and gesture recognition.
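A toy version of the FFT-based processing the lecture refers to, with made-up bin positions for a single simulated, noise-free point target, looks like this:

```python
import numpy as np

N, M = 64, 32             # fast-time samples per chirp, chirps per frame
rng_bin, dop_bin = 10, 5  # simulated target position in range/Doppler bins

n = np.arange(N)[None, :]   # fast time (sample index within one chirp)
m = np.arange(M)[:, None]   # slow time (chirp index)

# Ideal beat signal of one point target: the range appears as a tone in
# fast time, the Doppler shift as a tone across chirps in slow time.
beat = np.exp(2j * np.pi * (rng_bin * n / N + dop_bin * m / M))

# Range FFT along fast time + Doppler FFT along slow time = one 2D FFT.
rd_map = np.fft.fft2(beat)
d, r = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
print(r, d)  # 10 5
```

In a real MIMO sensor, a third FFT across the virtual antenna array yields the angle of arrival in the same way.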
Prof. Dr. Marcel Rupf

Academic Education
1987 Diploma in Electrical Engineering from ETH Zürich, Dipl. El. Ing. ETH
1993 PhD in Technical Science at ETH Zürich, Advisor: J.L. Massey
1993-1995 Postdoc at the IBM Research Lab in Rüschlikon, Switzerland

Work Experience in 3 Companies (Region of Zurich)
1995-2003 Consultant, Project Manager and Head of R&D in Mobile Communications

Zurich University of Applied Sciences (ZHAW)
since 2003 Lecturer for Digital Signal Processing and Wireless Communications
2007 Establishing the Center for Signal Processing and Wireless Communications (ZSN)
2007-2017 Head of ZSN
2017 Establishing the Institute of Signal Processing and Wireless Communications (ISC, http://www.zhaw.ch/isc/) out of ZSN
since 2017 Head of ISC
“Towards searching the Holy Grail in automatic music and speech processing – examples of the correlation between human expertise and automated classification”
Prof. Bożena Kostek
Audio and Acoustics Laboratory
Faculty of Electronics, Telecommunications, and Informatics
Gdansk University of Technology
Poland
This tutorial lecture addresses the still not fully achieved potential of automatic music and speech processing. There are parallels between music and speech, as both are audio signals; however, they differ much in detail. One such parallel is between automatic music transcription and text-to-speech technology; both are very advanced in research and technology, but they suffer from challenges related to polyphony and differentiated articulation in music, whereas in speech, differentiated accents, L2 speakers' pronunciation and diarization, just to name a few, cause problems. Another challenge concerns affective computing and predicting emotions in music and speech with sufficient accuracy. These challenges lie in the notion that we would like machine learning-based classification to deliver better results than human expertise could.
In this lecture, first, the basics of automatic music and speech processing are briefly reviewed. Then, the state of the art in machine learning, including both baseline and deep learning methods, is examined in the context of music and speech processing. Also, advances in the available technology related to these issues are presented. Finally, examples of research performed in the discussed areas are shown.
Bożena Kostek is a professor and head of the Audio and Acoustics Laboratory at Gdansk University of Technology (GUT), Poland. She is a corresponding member of the Polish Academy of Sciences, a Fellow of the Acoustical Society of America, and a Fellow of the Audio Engineering Society. She has published more than 600 scientific papers in journals and has led many research projects. Prof. Kostek has acted as Editor-in-Chief of the Archives of Acoustics (a JCR journal) and the Journal of the Audio Engineering Society (JAES), and in 2013 she was appointed Associate Editor of the IEEE Transactions on Audio, Speech, and Language Processing. Under her guidance, 18 Ph.D. students defended their doctoral theses, along with 280 M.Sc. and Eng. projects. She is the recipient of many prestigious awards, including two 1st prizes of the Prime Minister of Poland, several prizes of the Minister of Science, and the award of the Polish Academy of Sciences.
“Blood-vessels lumen geometric modeling and quantification from 3D images”
Prof. Andrzej Materka
Institute of Electronics
Lodz University of Technology
Poland
Vascular diseases comprise one of the major health problems worldwide. Their diagnosis, therapy planning, treatment and etiological understanding require personalized geometrical modeling of the vessel trees and measurement of their parameters, e.g. local lumen radius, volume of calcification or centerline course. Surface models of artery and vein walls are essential for visualization and for numerical simulation of blood flow, in support of biomedical research and education. Medical imaging is the main technique that makes those measurements possible in the least invasive way, with 3D magnetic resonance (MR) and computed tomography (CT) being the most popular modalities. This tutorial focuses on methods of automated processing of 3D images for objective quantification of blood vessel lumen. One of the aims of the search for such methods is to release medical experts from the tedious, time-consuming (and overall subjective) annotation of the enormous number of voxels in 3D space. Other expectations include better repeatability, objectivity and accuracy/precision of the obtained diagnostic data, and a shorter time for their extraction from images. Accurate segmentation of vascular objects in an image is a challenging task: the images contain noise and artefacts, and the vessels feature high shape complexity and are closely surrounded by other tissues and organs of similar appearance. The vessel branch diameters take values in a wide range, from tens of millimeters to tens of micrometers, while the resolution of the images is limited and cannot be improved without increasing the noise level or examination time. The noise introduces uncertainty into segmentation results, and extended acquisition time may result in significant artefacts caused by movement of the organs and the patient's body.
A widely adopted solution, acquiring images composed of anisotropic voxels (small in-plane elements extending deep into a few-times-thicker slice), is a compromise that does not help in resolving details of 3D shapes. Super-resolution preprocessing might be helpful in reducing segmentation errors in such cases. There are two general approaches to vessel segmentation: via 2D cross-sections perpendicular to the vessel centerline, and by direct 3D volumetric segmentation. The first is usually preceded by image "vesselness" filtering to define the centerline; the second may use long-running iterative computations, e.g. those of the level-set algorithm. Open-source, free software is available for 3D vascular image segmentation, e.g. SimVascular or ITK-Snap, as well as a few free-access databases of annotated vasculature images. References to those resources and examples of their use will be provided. Centerline-based methods and code, developed at the TUL Institute of Electronics in collaboration with researchers from Jena University and the Medical University of Łódź, will also be illustrated. Various schemes of 2D cross-section segmentation and quantification will be compared, with application to computer-simulated 3D images, MR images of 3D-printed vascular models, human coronary arteries visualized in X-ray CT volumes, and 3D MR images of human brain blood vessels.
Andrzej Materka received the M.Sc. degree from Warsaw University of Technology, the Ph.D. degree from Łódź University of Technology (TUL), and the Dr. Hab. (D.Sc.) degree from Wrocław University of Technology, in 1972, 1979, and 1985, respectively. In 1996 he was awarded the title of professor in technical sciences by the President of Poland.
In 1972-74 he was an engineer at R&TV Broadcasting Stations, Łódź, and since 1974 he has been employed at the TUL Institute of Electronics, serving as its director in 1995-2015. In 1980-82 he was a Monbusho scholar at Shizuoka University, Hamamatsu, Japan, and in 1992-94 he worked as a senior lecturer at Monash University, Faculty of Electrical and Computer Systems Engineering, Melbourne, Australia. In 2002-8 he was dean of the TUL Faculty of Electrical and Electronic Engineering. His research interests include numerical modeling of semiconductor devices, microwave techniques, medical electronics, digital signal processing and image analysis, artificial neural networks, brain-computer interfaces and geometric reconstruction of blood vessels from images. Professor Materka has published over 200 technical articles and 6 monographs, cited over 1200 times (h = 18). He is a Life Senior Member of the IEEE.
“Data-driven Simulation”
Prof. Sanja Lazarova-Molnar
Institute of Applied Informatics and Formal Description Methods
Karlsruhe Institute of Technology, Karlsruhe
Germany
Simulation models are often developed manually, with significant dependence on expert knowledge. Nowadays, however, the availability and prevalence of data have the potential to completely transform these traditional simulation modeling processes. Advances in data and process mining, combined with data modeling techniques, promise much in this direction. What can be automated, and what has to remain in the hands of experts, in data-driven simulation modeling? The tutorial will focus on these questions and provide directions and answers for addressing them.
Sanja Lazarova-Molnar is a Professor at the Institute of Applied Informatics and Formal Description Methods, Karlsruhe Institute of Technology. She is also a Professor at the University of Southern Denmark, where she leads the research group Modelling, Simulation and Data Analytics. She is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE), currently serving as Director-at-Large on the Board of Directors of the Society for Modeling & Simulation International (SCS). Furthermore, she is Chair of IEEE Denmark and Vice-Chair of the IEEE Denmark Women in Engineering Affinity Group. Most recently, her research has focused on data-driven simulation in various contexts and for the various simulation paradigms, and on reliability modeling of cyber-physical systems, such as manufacturing systems and smart buildings. Her email address is sanja.lazarova-molnar@kit.edu.
GUEST SPEAKERS - TUTORIALS GIVEN AT IEEE SPA 2020
“The Victory of Orthogonality”
Prof. Gilbert Strang
Department of Mathematics
Massachusetts Institute of Technology
United States
The equation Ax = 0 tells us that x is perpendicular to every row of A, and therefore to the whole row space of A. This is fundamental, but singular vectors v in the row space do more: (1) the v's are orthogonal; (2) the vectors u = Av are also orthogonal (in the column space of A). Those v's and u's are columns in the singular value decomposition AV = UΣ. They are eigenvectors of AᵀA and AAᵀ, perfect for applications.
We can list 10 reasons why orthogonal matrices like U and V are best for computation — and also for understanding. Fortunately the product of orthogonal matrices V1 V2 is also an orthogonal matrix.
As long as our measure of length is ‖v‖² = v₁² + ... + vₙ², orthogonal vectors and orthogonal matrices will win.
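The two orthogonality facts above can be checked in a few lines (a sketch with an arbitrary random matrix, not an example from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T

# (1) the v's are orthonormal; (2) the u's are orthonormal too.
assert np.allclose(V.T @ V, np.eye(3))
assert np.allclose(U.T @ U, np.eye(3))

# AV = UΣ, column by column: each Av_i equals sigma_i * u_i, so the
# vectors u = Av are orthogonal vectors in the column space of A.
assert np.allclose(A @ V, U * s)
```

The same check confirms the eigenvector statement, since AᵀA V = V diag(s²) follows directly from AV = UΣ.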
Gilbert Strang was an undergraduate at MIT and a Rhodes Scholar at Balliol College, Oxford. His Ph.D. was from UCLA and since then he has taught at MIT. He is a member of the National Academy of Sciences. His classes are the most watched on MIT’s opencourseware ocw.mit.edu and his books include Introduction to Linear Algebra (5th edition) and the 2019 book Linear Algebra and Learning from Data : math.mit.edu/learningfromdata. The new book for 2020 is Linear Algebra for Everyone.
“Flexible hardware architectures for robust Cyberphysical systems”
Prof. Dr.-Ing. habil. Michael Hübner
Computer Engineering Group
Brandenburg University of Technology - Cottbus - Senftenberg
Germany
Cyber-physical systems need to handle complex applications in different domains. The challenge is the parameterization and composition of the resources of such systems, which traditionally needs to be done at design time. This requires exploring a large design space and therefore does not lead to an optimized setup. Run-time adaptive systems can find an optimal point of operation at run-time, which is an advantage. This talk shows possible architectures that allow such flexibility and discusses future solutions.
Prof. Dr.-Ing. habil. Michael Hübner (IEEE Senior Member) is a full professor at the Brandenburg University of Technology - Cottbus - Senftenberg, where he has led the Computer Engineering Group since October 2018. From 2012 to 2018 he led the Chair for Embedded Systems for Information Technology (ESIT) at the Ruhr University of Bochum (RUB). He received his diploma degree in electrical engineering and information technology in 2003 and his PhD degree in 2007 from the University of Karlsruhe (TH). Prof. Hübner did his habilitation in 2011 at the Karlsruhe Institute of Technology (KIT) in the domain of reconfigurable computing systems. His research interests are in reliable and dependable reconfigurable computing, particularly new technologies for adaptive FPGA run-time reconfiguration and on-chip network structures with applications in automotive systems, including their integration into high-level design and programming environments. Prof. Hübner is the main author or co-author of over 200 international publications. He is on the steering committee of the IEEE Computer Society Annual Symposium on VLSI, has organized more than 25 events such as workshops and symposia, and is active as a guest editor for journals such as IEEE TECS, IEEE VCAL and IEEE TNANO.
“The SMART4ALL toolbox for boosting technology and business development in South, Eastern and Central Europe”
Prof. Georgios Keramidas
Aristotle University of Thessaloniki
Embedded Systems Design and Applications Laboratory
University of Peloponnese
Greece
SMART4ALL is a four-year Innovation Action project funded under the Horizon 2020 framework, under call DT-ICT-01-2019: Smart Anything Everywhere – Area 2: Customized low energy computing powering CPS and the IoT. The target of the project is to establish a unique pan-European network of Digital Innovation Hubs that will not only support innovation and reveal business opportunities across South, Eastern and Central Europe, but will also build capacity via the development of self-sustained, cross-border pathfinder application experiments. The project will provide a total funding of €2.2 million via 9 open calls, supporting 88 cross-border pathfinder application experiments from European consortia. Each experiment will get funding of up to €80,000 and will be supported with novel coaching services from world-leading experts in ethics, technology, funding and business development. Apart from this, SMART4ALL puts forward a unique concept called Marketplace-as-a-Service (MaaS): a one-stop smart shop for startups, SMEs and slightly bigger companies. This presentation will introduce SMART4ALL and concentrate on the "Prepare for Growth" services offered by MaaS.
Georgios Keramidas is an assistant professor at the Aristotle University of Thessaloniki and a senior researcher at the Embedded Systems Design and Applications Laboratory at the University of the Peloponnese, Greece. Dr. Keramidas serves as the technical coordinator of the SMART4ALL project. His main research interests are in the areas of low-power processor/memory design, multicore systems, VLIW/multi-threaded architectures, network and graphics processors, reconfigurable systems, power modelling methodologies, FPGA prototyping, and compiler optimization techniques. He has published more than 70 papers and two books, and he also holds 10 US patents (4 more patents are under evaluation). His work has received more than 1000 citations. In recent years, Dr. Keramidas has participated in nine research projects funded by the European Commission and in five national projects, either as project coordinator, technical coordinator, work package leader, or senior researcher. Dr. Keramidas is a member of ACM, a regular reviewer and program committee member for high-quality conferences, workshops and transactions, and a member of the HiPEAC European Network of Excellence.
“SMART4ALL Open Calls, the right funding instrument to boost technology and business development in South, East and Central Europe”
Mr Antonio Montalvo
Telecommunications Engineer & MBA
Senior Project Manager at FundingBox Accelerator
Poland
SMART4ALL is a four-year Innovation Action project funded under the Horizon 2020 framework, under call DT-ICT-01-2019: Smart Anything Everywhere – Area 2: Customized low energy computing powering CPS and the IoT. The target of the project is to establish a unique pan-European network of Digital Innovation Hubs that will not only support innovation and reveal business opportunities across South, Eastern and Central Europe, but will also build capacity via the development of self-sustained, cross-border pathfinder application experiments. The project will provide a total funding of €2.2 million via 9 open calls, supporting 88 cross-border pathfinder application experiments from European consortia. Each experiment will get funding of up to €80,000 and will be supported with novel coaching services from world-leading experts in ethics, technology, funding and business development. This presentation will describe the Open Calls in detail, including their value proposition, the processes for eligibility (who can apply), evaluation and selection, the financial support, and how to submit an application.
Antonio M. Montalvo is a Project Manager at FundingBox. He holds a Master's Degree in Telecommunications Engineering from the UPM (Universidad Politécnica de Madrid) and a Master in Business Administration from the Universidad Pontificia Comillas. He started his career in the telecommunications sector, working in technical through managerial positions in Europe and the United States. He participated in the EU-funded project which developed the High-Definition TV system for Europe. Later he moved to Latin America, where he was involved in the management of private and public projects for telecom, transport and housing infrastructure in countries such as Mexico, the Dominican Republic and Costa Rica. Back in Europe, during the last years he has worked as Project Manager for different EU-funded projects, such as H2020-732947 frontierCities2, a FIWARE accelerator where he coordinated the grant management and financial support to third parties. He is currently managing three H2020 projects dealing with Open Call management: H2020-824964 DIH2, about the implementation of robotics for agile production in manufacturing companies; H2020-76742 L4MS, with the same aim as DIH2 but specifically devoted to logistics companies; and H2020-872614 SMART4ALL, about the technology transfer of any smart domain between countries of South, East and Central Europe.
GUEST SPEAKERS - TUTORIALS GIVEN AT IEEE SPA 2019
“Open Access infrastructures and services for the next generation of ambient assisted living environments”
Prof. Nikolaos S. Voros
Embedded Systems Design and Application Laboratory
Department of Electrical and Computer Engineering
University of the Peloponnese
Greece
Ambient Assisted Living (AAL) environments are rapidly shifting towards ICT-based solutions. In this context, a wide range of consumer electronics technologies come into play, ranging from robotics and embedded systems to sensors and communication infrastructures. Open access infrastructures and services can play a key role in the wide adoption of the next generation of AAL coaching systems. In that respect, we present a service-oriented, easily expandable design paradigm that emphasizes how heterogeneous COTS ICT technologies can be used as enablers of existing and new AAL services. The services focus on Activities of Daily Living (ADL) monitoring algorithms and on facilitating the end user's everyday indoor activities, with emphasis on multifaceted energy conservation and security provision.
Prof. Nikolaos S. Voros received his Diploma in Computer and Informatics Engineering in 1996 and his PhD degree in 2001, both from the University of Patras, Greece. His research interests fall in the area of embedded and cyber-physical system design. Prof. Voros is Associate Professor at the Technological Institute of Western Greece, Department of Computer and Informatics Engineering, where he leads the Embedded Systems Design and Application Laboratory, a validated Digital Innovation Hub of the European Commission. In recent years, Prof. Voros has participated in more than 20 research projects funded by the European Commission, either as project manager or as scientific advisor. He has served as scientific coordinator of FP7 STREP Project 287733 ALMA and Horizon 2020 Project ARGO, and as ICT coordinator for FP7 STREP Project 287720 ARMOR and Horizon 2020 Project RADIO. Prof. Voros has published a significant number of books and refereed articles in international journals and conferences, and he is also editor of the books «System Level Design with Reuse of System IP», «System Level Design of Reconfigurable Systems-on-Chip», «Components and Services for IoT Platforms: Paving the Way for IoT Standards», «Advances in Aeronautical Informatics: Technologies Towards Flight 4.0» and «RADIO - Robots in Assisted Living: Unobtrusive, Efficient, Reliable and Modular Solutions for Independent Ageing», published by Springer. He was the General Chair of the IEEE Computer Society Annual Symposium on VLSI, which took place in Kefalonia, Greece in July 2010, and he also served as Program Co-Chair for the 24th International Conference on Field Programmable Logic and Applications (FPL 2014), which took place at the Technical University of Munich on September 2nd – 4th, 2014. In May 2018, he organized the 14th International Symposium on Applied Reconfigurable Computing, which took place in Santorini, Greece.
“Low-Power Methods for Fault Detection and Correction in Logic and Flip-Flops. Architectures and Limitations”
Prof. Dr.-Ing. Heinrich Theodor Vierhaus
Fachgebiet Technische Informatik
Brandenburg University of Technology Cottbus-Senftenberg
Germany
Integrated circuit technology has fought against two specific “walls” during the last decade. First it was the “power wall”, followed by the “reliability wall”. Unfortunately, these walls are not independent, resulting in the need for on-line fault detection and correction at minimum cost in power. Duplication and even triplication of hardware, in other words double- and triple-modular redundancy, are heavily used today, since they are robust in terms of fault duration and timing in general, and they work without costly and complex re-design in modules such as embedded processor cores. Alternative methods have been developed which detect faults by observing signal transitions at the wrong time, specifically for delay faults in logic and radiation-induced faults in flip-flops and registers. There are ways of combining these approaches with a promise of fault detection and correction at minimum cost in power. The tutorial introduces such techniques and shows their limits. Combinations with more robust techniques are also possible for so-called “self-dual” circuits. Finally, the aspect of system-level error resilience is addressed.
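The cost/benefit trade-off of triplication can be made concrete with a minimal, illustrative Python sketch (ours, not taken from the tutorial): a TMR voter runs three copies of a module and takes a bitwise majority, so any single faulty replica is masked at the price of roughly triple hardware and power.

```python
def tmr_vote(a, b, c):
    """Bitwise majority of three redundant copies of one output word."""
    return (a & b) | (a & c) | (b & c)

def run_tmr(module, x, fault_mask=0, faulty_copy=None):
    """Run three copies of `module`; optionally XOR a fault mask into one
    copy's output to model a transient upset. The voter masks any
    single-copy error, but a second simultaneous faulty copy would win
    the vote -- TMR's robustness is limited to single faults."""
    outputs = [module(x) for _ in range(3)]
    if faulty_copy is not None:
        outputs[faulty_copy] ^= fault_mask
    return tmr_vote(*outputs)

# Example: a parity "module" with one replica hit by a bit flip.
parity = lambda x: bin(x).count("1") & 1
print(run_tmr(parity, 0b1011, fault_mask=1, faulty_copy=2))  # still 1
```

Simple duplication (DMR), by contrast, can only detect a disagreement, not decide which copy is correct — which is why triplication plus majority vote remains the robust default despite its power cost.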
Heinrich Theodor Vierhaus received a diploma in electrical engineering from Ruhr-University Bochum (Germany) in 1975.
Lecturer for RF technology and electronic circuits with Dar-es-Salaam Technical College in Tanzania (East Africa) from 1975 to 1977.
Dr.-Ing. in EE from University of Siegen in 1983.
Senior researcher with GMD, the German National Research Institute for Information Technology from 1983 to 1996.
Professor for “Technische Informatik” (computer engineering) at Brandenburg University of Technology Cottbus since 1996.
He has authored or co-authored about 250 papers in the area of computer engineering, mainly related to IC test, technology, dependability and fault correction. He also contributed to three books in the area as an editor and an author. Since 2009, he has initiated and coordinated several projects on advanced education of doctoral students in the area of dependable systems, based on an international network of universities and research institutes.
“Adaptive computing for flexible, resilient and robust embedded systems”
Prof. Dr.-Ing. Michael Hübner
Fachgebiet Technische Informatik
Brandenburg University of Technology Cottbus-Senftenberg
Germany
The talk presents current and novel research on adaptive computing systems. These systems are able to adapt to the requirements of algorithms or to quality-of-service requests from the user during run time. Additionally, the next generation of embedded systems needs to provide high robustness, e.g. by correcting faults which can occur at chip level. This can be achieved by an increased resilience.
Prof. Dr.-Ing. Michael Hübner has been working for over a decade in several domains, e.g. reconfigurable computing and embedded systems for the Internet of Things. He is main author or co-author of over 300 scientific publications in international conference proceedings and journals. He is inventor or co-inventor of over 12 patents. He is active in national and international scientific projects (EU, DFG, BMBF, DAAD).
“Machine learning for embodied agents: from signals to symbols and actions”
Prof. Piotr Skrzypczyński
Division of Control and Robotics
Institute of Control, Robotics and Information Engineering
Faculty of Electrical Engineering
Poznan University of Technology
Poland
The aim of this tutorial lecture is to show the role of machine learning and
some other AI-related techniques in embodied autonomous agents, and autonomous robots in particular.
In this tutorial we bring to the forefront the aspects of robotics that are
closely related to computer science. We believe that the progress in algorithms
and data processing methods together with the rapid increase in the available
computing power were the driving forces behind the successes of modern robotics in the last decade.
During this period robots of various classes migrated from university laboratories
to commercial companies and then to our everyday life, as now everybody can buy an
autonomous vacuum cleaner or lawnmower, while self-driving cars and drones for
goods delivery are waiting for proper legal regulations to enter the market.
Robotics and Artificial Intelligence have already traveled a long path of mutual inspiration
and common development, starting from symbolic AI (aka Good Old-Fashioned Artificial Intelligence)
and its extensive use in early autonomous robots, such as Shakey the robot [1],
created at SRI International by Nils Nilsson, considered one of the "fathers" of modern AI.
We briefly characterize the range of the most important applications of typical AI methods
in modern robotics, including motion planning algorithms [2,3], interpretation of sensory data
leading to creation of a world model [4,5], and classical learning methods, such as reinforcement learning [6].
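As a toy illustration of the classical reinforcement-learning loop mentioned above (a generic sketch of our own, not an example from the lecture or from [6]), tabular Q-learning on a one-dimensional corridor already shows the essential temporal-difference update:

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a 1-D corridor: the agent starts at state 0,
    actions are 0 (step left) and 1 (step right), and reaching the last
    state yields reward +1. Q[s][a] estimates the discounted return."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning()
```

After training, the learned values prefer "right" in every non-terminal state, i.e. the agent has inferred the shortest path to the reward purely from interaction.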
However, what made robotics a part of the new wave of AI applications was the recent "revolution" of machine learning,
mostly grounded in the enormous success of the deep learning paradigm and its many variants
that proved to outclass classic methods in a broad range of problems related to the processing of images and other types of signals.
The quick adoption of the recent advances in Machine Learning (ML) in robotics seems to be motivated
by the fact that ML gives the possibility to infer solutions from data, as opposed to the
classic model-based paradigm that was for decades used in robotics.
Whereas the model-based solutions are mathematically elegant and theoretically provable
(with respect to stability, convergence, etc.) they often fail once confronted with real-world problems
and real sensory data, as their underlying mathematical models are only a very rough approximation of the real world.
Therefore, a wider adoption of ML in robotics gives a chance to make robots more robust and adaptive.
On the other hand, we should try to use the new techniques without discarding the knowledge and expertise
we already have - machine learning methods can benefit a lot from the prior knowledge and
the known structure of the problem that has to be solved by learning.
This knowledge and structure can be adopted from the model-based methods that are already well-established in robotics.
In the lecture robots are understood in a broad sense, as all embodied agents that
have means to physically interact with the environment.
They can be manipulators, mobile robots, aerial vehicles, self-driving cars,
or various "smart" devices and sensors.
In the second part of the lecture attention is paid to specific problems that appear in
the application of machine learning to embodied agents, such as the need to search for a solution in
huge, multi-dimensional spaces (the "curse of dimensionality"), and the ever-present
problem of representation and incorporation of uncertainty in the processing of real-world data.
Some examples of applications of autonomous robots are given, which were successful
due to the use of AI - in particular the probabilistic representation of knowledge and machine learning.
The most prominent examples are the DARPA competitions: "Grand Challenge", "Urban Challenge"
and "Robotics Challenge" (DRC), and the "Amazon Picking Challenge", which proves the
interest of large corporations in the development of AI-based robotics [7].
In the third part of the lecture new research directions offered by machine
learning and the increased availability of training data are discussed.
An overview of the most popular application areas of ML in robotics and other
autonomous systems is presented along with the typical machine learning paradigms
applied in these areas.
The focus is on deep learning, mostly using convolutional neural networks
to process various sensory data.
We discuss three aspects of embodied agents that make machine learning in robotics
quite specific with respect to other application areas, such as medical images or
natural language processing.
The first aspect is dealing with the "open world", in which autonomous robots usually operate.
This situation breaks the assumptions underlying some popular ML methods,
and creates the need to face the problems of unknown class identification [8],
incremental learning [9], and the uncertainty of sensory data [10].
We also stress that an embodied agent has the ability to actively acquire information [11].
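One widely used recipe for handling the uncertainty of sensory data is Monte-Carlo dropout in the spirit of [10]. The following minimal sketch (our illustration, not code from the lecture) keeps dropout active at prediction time for a single linear unit and reads the spread of the sampled outputs as a rough uncertainty estimate:

```python
import random
import statistics

def mc_dropout_predict(weights, x, p_drop=0.5, passes=100, seed=0):
    """Monte-Carlo dropout for one linear unit: dropout stays active at
    test time; the mean of the stochastic passes is the prediction and
    their standard deviation serves as a rough uncertainty estimate."""
    rng = random.Random(seed)
    samples = []
    for _ in range(passes):
        y = sum(w * xi / (1.0 - p_drop)      # inverted-dropout scaling
                for w, xi in zip(weights, x)
                if rng.random() >= p_drop)   # randomly drop each weight
        samples.append(y)
    return statistics.mean(samples), statistics.stdev(samples)

mean, std = mc_dropout_predict([0.4, -0.2, 0.7], [1.0, 2.0, 0.5])
```

A robot can threshold such an uncertainty estimate to decide when a perception result should not be trusted, or when more information should be actively acquired.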
The second aspect is the inference about the scene seen by the agent, where in the case of robotics,
semantics and geometry intermingle [12], because the robot has to work in a three-dimensional world,
although it often perceives it through two-dimensional images [13,14].
The third aspect of our analysis is related to the most important feature of robots that
distinguishes them from all other learning agents (software-based).
Robots are embodied agents, that is they have a physical "body", and are subject to
physical constraints, such as the maximum speed of motion or maximum range of perception.
Therefore, in ML for robots, the analysis of spatio-temporal dependencies in data is very important [15].
Robots support advanced learning methods thanks to the possibility of interaction
with the environment - a simple example is active vision with a moving camera; a much more
complex one is manipulation with active testing of the behavior of objects (repositioning, pushing) [16].
At the end of the lecture, in the context of specific needs and limitations
characteristic to the applications of ML in robotics, new concepts of machine learning
(e.g. deep reinforcement learning [17], interactive perception [18]) are presented.
The lecture is summarized with a brief discussion of the most important challenges and
open problems of ML applied to embodied agents.
References
[1] R. Fikes, P. E. Hart, N. J. Nilsson, "Learning and executing generalized robot plans". Artificial Intelligence, 3(4), 1972.
[2] D. Belter, P. Łabęcki, P. Skrzypczynski, "Adaptive motion planning for autonomous rough terrain traversal with a walking robot". Journal of Field Robotics, 33(3), 2016.
[3] S. Tonneau et al.. "An efficient acyclic contact planner for multiped robots". IEEE Transactions on Robotics, 34(3), 2018.
[4] S. Thrun, "Learning metric-topological maps for indoor mobile robot navigation". Artificial Intelligence, 99(1), 1998.
[5] C. Cadena et al., "Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age", IEEE Transactions on Robotics , 32(6), 2016.
[6] J. Kober, J. Peters, "Reinforcement learning in robotics: A survey", In: Learning Motor Skills. STAR, Vol 97, Springer, 2014.
[7] C. Eppner et al., "Lessons from the Amazon Picking Challenge: Four aspects of building robotic systems",
International Joint Conference on Artificial Intelligence, Melbourne, 2017.
[8] W. Scheirer et al., "Toward open set recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7), 2013.
[9] G. Csurka. "Domain adaptation for visual applications: A comprehensive survey", arXiv preprint, 2017.
[10] Y. Gal, Z. Ghahramani. "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", International Conference on Machine Learning, 2016.
[11] F. Dayoub, N. Sunderhauf, P. Corke, "Episode-based, active learning with Bayesian neural networks". CVPR Workshop on Deep Learning for Robotic Vision, 2017.
[12] D. Lin, S. Fidler, R. Urtasun, "Holistic scene understanding for 3D object detection with RGBD cameras". IEEE International Conference on Computer Vision, 2013.
[13] X. Yan, J. Yang, E. Yumer, Y. Guo, H. Lee, "Perspective transformer nets: Learning single-view 3D object reconstruction without 3D supervision". NIPS, 2016.
[14] S. Pillai, J. Leonard, "Monocular SLAM supported object recognition", Robotics: Science and Systems, 2015.
[15] N. Atanasov, B. Sankaran, J. Le Ny, G. J. Pappas, K. Daniilidis, "Nonmyopic view planning for active
object classification and pose estimation". IEEE Transactions on Robotics, 30(5), 2014.
[16] D. Holz et al., "Active recognition and manipulation for mobile robot bin picking". In: Gearing Up and Accelerating Crossfertilization between Academic and Industrial Robotics Research in Europe, STAR Vol. 94, Springer, 2014.
[17] T. P. Lillicrap et al., "Continuous control with deep reinforcement learning". CoRR, abs/1509.02971, 2015.
[18] J. Bohg et al., "Interactive perception: Leveraging action in perception and perception in action", IEEE Transactions on Robotics, 33(6), 2017.
[19] S. Wachter, B. Mittelstadt, L. Floridi, "Transparent, explainable, and accountable AI for robotics", Science Robotics, 2(6), 2017.
[20] D. Belter, P. Skrzypczynski, "A biologically inspired approach to feasible gait learning for a hexapod robot", International Journal of Applied Mathematics and Computer Science, 20(1), 2010.
[21] L. Fei-Fei, R. Fergus, P. Perona, "One-shot learning of object categories". IEEE Trans. Pattern Analysis and Machine Intelligence, 28(4), 2006.
[22] Y. Guo et al., "Zero-shot learning with transferred samples", IEEE Transactions on Image Processing, 26(7), 2017.
[23] M. Kopicki et al., "One shot learning and generation of dexterous grasps for novel objects". International Journal of Robotics Research, 35(8), 2016.
Piotr Skrzypczyński (M'98) received the Ph.D. and D.Sc. degrees in Robotics from Poznan University of Technology (PUT) in 1997 and 2007, respectively.
Since 2010 he has been an associate professor at the Institute of Control, Robotics and Information Engineering (ICRIE) of PUT,
and head of the Mobile Robotics Laboratory at ICRIE.
He is author or co-author of over 180 papers in robotics and computer science.
His current research interests include: autonomous mobile robots, simultaneous localization and mapping,
multisensor fusion, machine learning, and computational intelligence in robotics.
GUEST SPEAKERS - TUTORIALS GIVEN AT IEEE SPA 2018
“Vision for Vision – Deep Learning in Retinal Image Analysis”
Prof. Bart M. ter Haar Romeny, PhD
Department of Biomedical Engineering
Eindhoven University of Technology
The Netherlands
Automated, fast and large-scale computer-aided diagnosis of medical images has become reality. The greatest breakthrough is Deep Learning. It has huge impact in self-driving cars, industrial product inspection, surveillance, robotics and translation services, and in the medical arena it already outperforms human experts in many domains.
However, it is still largely a black box. What can we learn from recent insights in the functionality, nanometer-scale connectivity and self-organization of the human visual brain? We will discuss several recent breakthroughs in our understanding of visual perception and visual deep learning.
We apply these techniques in the RetinaCheck project, a large screening / early warning project for eye damage due to diabetes. In China, an alarming 11.6% of the population has now developed diabetes, due to genetic factors and fast lifestyle changes. In this project large amounts of retinal fundus images are acquired, and the e-cloud deep learning system successfully learns to identify early biomarkers of retinal disease.
The circle is round: we can prevent blindness by learning from the visual system: vision for vision.
Bart Romeny (1952) is professor in Biomedical Image Analysis (BMIA) at Eindhoven University of Technology, the Netherlands.
1979 MSc in Applied Physics, Delft University of Technology, NL.
1983 PhD in Physics and Life Sciences, Utrecht University, NL.
1989 Associate prof. Medical Imaging, Utrecht University NL.
2001 Professor Biomedical Image Analysis, Eindhoven University of Technology, NL, emeritus per 21-12-2017.
2013 Professor Biomedical Image Analysis, Northeastern University, Shenyang, China.
His research interests focus on automated computer-aided diagnosis, and quantitative medical image analysis, using brain-inspired computing and brain network modeling. He pioneered the exploitation of multi-scale differential geometry in medical image analysis. His interactive tutorial book is used worldwide. He currently leads the RetinaCheck project, a large screening project for early warning for diabetes and diabetic retinopathy in Liaoning Province. He has developed many sophisticated retinal image analysis applications with his team, in close collaboration with clinical partners and industry.
He (co-)authored over 230 scientific papers, with 14,418 citations and an h-index of 42. He is a reviewer and/or
associate editor for a range of journals and conferences, and a frequent keynote speaker at
conferences and summer schools.
He was president of the Dutch Society for Clinical Physics, president of the Dutch Society of Biophysics and Biomedical Engineering. He is currently president of the Dutch Society for Pattern Recognition and Image Processing, and board member of the International Association for Pattern Recognition (IAPR).
He is Fellow of the European Alliance for Medical and Biological Engineering & Science (EAMBES), and senior member of IEEE. He is recipient of the Chinese Liaoning Friendship Award in 2014. He is an enthusiastic and awarded teacher.
“Migrating Electronic Systems from Fault Tolerant Computing to Error Resilience”
Prof. Heinrich Theodor Vierhaus
Computer Engineering Group
Brandenburg University of Technology Cottbus
Germany
Fault tolerant computing is a branch of technology which has developed continuously over decades since the late 1940s. Application was limited to areas such as ultra-reliable computers for banks, space flight, aviation, and nuclear power stations. At that time, the extra hardware needed was a real problem that had to be accepted. Overhead often went beyond triplication, and the extra power was acceptable. Only since about the 1990s have electronic “embedded” sub-systems found their way into new applications such as automotive systems and industrial control. Typically, a robust type of electronics was employed which stayed away from the smallest feature-size technologies and the lowest signal voltage swings. More recently, advanced features implemented by automotive electronic systems, such as ultra-fast image processing towards autonomous driving, strictly demand implementation in nano-electronic technologies with a minimum feature size of 20 nm and below. Then, suddenly, there are two demands which strongly interact. ICs in nano-technologies show a rising vulnerability to disturbing influences such as particle radiation. Furthermore, the increasing stress due to scaling at constant supply voltage, rather than the previous scaling at constant field strength, implies a higher level of inherent stress, resulting in wear-out and shorter lifetimes. Hence on-line fault detection and subsequent error correction becomes necessary on a wide scale. But now the extra power needed for fault tolerant computing becomes a nightmare, since extra power and extra heat strongly promote aging effects. Research has gone two ways. First, methods of fault detection and error correction that get along with only a small extra power budget were developed. Unfortunately, they are not as powerful, robust and universally applicable as, for example, triplication plus majority vote (TMR), which consumes more than triple power. The second approach to the solution is the concept of “error resilience”.
It is based on the observation that, depending on the function and application of a circuit, a limited number of faults may be acceptable for some time without total system failure. If the overall system can become aware of its own fault status, acceptance of some faults, followed by a process of self-repair by re-organization during a time slot when the system is “at rest”, may be a partial solution. Then only those parts of a digital system where single or multiple bit errors will damage the function directly and critically will need traditional “fast and hot” methods of error detection and error correction.
Heinrich Theodor Vierhaus received a diploma in electrical engineering from Ruhr-University Bochum (Germany) in 1975.
Lecturer for RF technology and electronic circuits with Dar-es-Salaam Technical College in Tanzania (East Africa) from 1975 to 1977.
Dr.-Ing. in EE from University of Siegen in 1983.
Senior researcher with GMD, the German National Research Institute for Information Technology from 1983 to 1996.
Professor for “Technische Informatik” (computer engineering) at Brandenburg University of Technology Cottbus since 1996.
He has authored or co-authored about 250 papers in the area of computer engineering, mainly related to IC test, technology, dependability and fault correction. He also contributed to three books in the area as an editor and an author. Since 2009, he has initiated and coordinated several projects on advanced education of doctoral students in the area of dependable systems, based on an international network of universities and research institutes.
“Electronic Systems and Interfaces Aiding the Visually Impaired”
Prof. Paweł Strumiłło
Medical Electronics Division
Institute of Electronics
Lodz University of Technology
Poland
Visual impairment is one of the most serious sensory disabilities. It deprives a human being of an active professional and social life. EU reports indicate that for every 1,000 European citizens, 4 are blind or suffer from serious visual impairment, and this number is predicted to increase with time due to our ageing society.
In spite of numerous worldwide research efforts focusing on building innovative aids helping the blind, no single electronic travel aid (ETA) solution has been widely accepted by the blind community. The aim of the tutorial is to present the current state of the art in the field of electronic interfaces aiding the blind in independent travel, navigation and access to information. Functional solutions and outcomes of recent research projects devoted to assistive technologies for the visually impaired will be presented.
Paweł Strumiłło received the MSc, PhD and DSc degrees and currently holds the position of full-time university professor at Lodz University of Technology (TUL), Poland. In 1991-1993 he was with the University of Strathclyde (under the EU Copernicus programme), where he defended his PhD thesis. His current research interests include medical electronics, processing of biosignals, soft computing methods and human-system interaction. He has published more than 100 frequently cited technical articles, authored one book and co-authored two others. He was a principal or co-principal investigator in a number of Polish and European research projects aimed at developing ICT solutions and electronic aids for persons with physical and sensory disabilities. He has received a number of prizes and awards for the development of assistive technologies for visually impaired people (in cooperation with Orange Labs). Since 2015 he has been the head of the Institute of Electronics at TUL. He is a Senior Member of the IEEE and a member of the Biocybernetics and Biomedical Engineering Committee of the Polish Academy of Sciences.
“Contemporary technologies and techniques for processing of human eye images”
Prof. Adam Dąbrowski
Division of Signal Processing and Electronic Systems
Institute of Automation and Robotics
Faculty of Computing
Center of Mechatronics, Biomechanics, and Nanoengineering
Poznan University of Technology
Poland
Imaging technologies and techniques for the human eye are used for both biometric and medical-diagnostic applications. Among the various types of eye images, the following can be distinguished: iris images, fundus images, and various optical coherence tomography (OCT) scans. Contemporary processing approaches to all of these image types are reviewed and analyzed, together with a discussion of their applications. Advanced image processing methods and algorithms, including the artificial intelligence approach, developed at the Division of Signal Processing and Electronic Systems of the Poznań University of Technology for the considered applications, are presented. The proposed solutions are characterized by good effectiveness and accuracy in the support of appropriate biometric and clinical decisions.
Adam Dąbrowski received a Ph.D. in Electrical Engineering (Electronics) from the Poznan University of Technology, Poznan, Poland in 1982. In 1989 he received the Habilitation degree in Telecommunications from the same university. Since 1997 he has been a full professor in digital signal processing at the Faculty of Computing, Poznan University of Technology, Poland, and Chief of the Division of Signal Processing and Electronic Systems. He was also a professor at the Adam Mickiewicz University, Poznan, Poland, Technische Universität Berlin, Germany, and Universität Kaiserslautern, Germany, and a visiting professor at the Eidgenössische Technische Hochschule Zürich, Switzerland, Katholieke Universiteit Leuven, Belgium, and Ruhr-Universität Bochum, Germany. He was a Humboldt Foundation fellow at the Ruhr-Universität Bochum, Germany (1984-1986).
His scientific interests concentrate on: digital signal processing (digital filters, signal separation, multidimensional systems, wavelet transformation), processing of images, video and audio, multimedia and intelligent vision systems, biometrics, and processor architectures. He is author or co-author of 5 books and over 500 scientific and technical publications. Among them, he is one of the co-authors of "The Computer Engineering Handbook" (first edition in 2002, second edition in 2008), a bestseller and one of the most frequently cited books of CRC Press, Boca Raton, USA.
GUEST SPEAKERS - TUTORIALS GIVEN AT IEEE SPA 2017
“Probabilistic and Unsupervised Machine Learning for Auditory Data and Pattern Recognition”
Prof. Jörg Lücke
Cluster of Excellence Hearing4all and Dept for Medical Physics and Acoustics
Carl von Ossietzky University Oldenburg
Germany
I will introduce and discuss probabilistic data models and their applications to auditory data and to general pattern recognition tasks. The introduction will reflect the view that powerful data models are able to extract the true compositional nature of data, which allows for a decomposition of the data into its structural primitives.
Probabilistic sparse coding and a probabilistic version of non-negative matrix factorization (NMF) will be the first concrete models introduced and discussed. The state of the art of these models will then be used to point to recent generalization directions. Two of these, translation-invariant versions and deep generalizations, will be discussed in more detail. I will work out the benefits and challenges such novel approaches face, and I will discuss their crucial differences compared to supervised deep neural networks.
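For readers unfamiliar with NMF, the classic Lee–Seung multiplicative updates (a standard textbook algorithm, sketched here in plain Python; the probabilistic variants discussed in the tutorial are more involved) factor a non-negative matrix V into non-negative factors W and H:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k=2, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H. Each update
    multiplies an entry by a non-negative ratio, so W and H stay
    element-wise non-negative throughout the optimization."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))   # H update ratio
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(k)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)   # W update ratio
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
    return W, H

# A rank-2 non-negative matrix is recovered almost exactly.
V = [[1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [1.0, 1.0, 3.0]]
W, H = nmf(V)
```

The non-negativity constraint is what yields parts-based, interpretable components, e.g. spectral building blocks of auditory data.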
Finally, I will briefly discuss semi-supervised approaches, a field where modern unsupervised and modern supervised Machine Learning algorithms come together, compete, and are combined.
Since 2013, Jörg Lücke has been an Associate Professor of Machine Learning in the Cluster of Excellence Hearing4all and the Dept. for Medical Physics and Acoustics, Carl von Ossietzky University Oldenburg, Germany, and a Guest Professor and Principal Investigator in the Dept. of Software Engineering and Theoretical Computer Science, Technical University of Berlin, Germany.
From 2008 to 2013 he was a Junior Research Group Leader (Computational Neuroscience and Machine Learning) at the Frankfurt Institute for Advanced Studies and the Dept. of Physics, Goethe-University Frankfurt am Main, Germany.
From 2005 to 2007 he was a Senior Research Fellow at the Gatsby Computational Neuroscience Unit, University College London (UCL), UK.
From 2001 to 2005 he was a Research Associate at the Institute for Neuroinformatics, Ruhr-University Bochum, Germany.
“Forward Error Correction in Wireless Communication Systems for Industrial Applications”
Prof. Petr Pfeifer
and
Prof. Heinrich Theodor Vierhaus
Computer Engineering Group
Brandenburg University of Technology Cottbus-Senftenberg
Germany
Industrial manufacturing is more and more based on groups of robots in production cells. The robots consist of moving, bending and rotating arms with multiple joints. Cables that connect sections of robots undergo heavy stress from stretching and twisting, resulting in wear-out and failure. Replacing cables on robots by wireless communication is therefore an alternative that has been investigated for some time. Unfortunately, communication channels in industrial environments suffer from some adverse effects. First, standard industrial communication networks work on rigid time frames, which limits the allowed latencies in communication systems considerably. Second, multiple-path propagation and destructive interference make such communication channels sensitive to fading problems. Therefore forward error correction (FEC) that can compensate for massive variations of signal strength becomes a must. On the other hand, forward error correction using known methods such as BCH codes, Reed-Solomon codes, turbo codes and low-density parity check (LDPC) codes is not very fast by nature. Codes for single error correction and double error detection (SEC-DED codes), such as the Hamming code and the Hsiao code, are fast, but they are not powerful enough to correct multiple bit errors or restore missing symbols, unless they are applied in a step-wise approximation.
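To make the SEC-DED idea concrete, here is a minimal extended-Hamming (8,4) sketch in Python (an illustration of the general code family, not of the hardware encoders discussed in the tutorial): one overall parity bit added to a Hamming(7,4) codeword lets the decoder correct any single-bit error and detect, but not correct, double-bit errors.

```python
def hamming84_encode(nibble):
    """Extended Hamming (8,4): 4 data bits + 3 parity bits (positions
    1, 2, 4) + one overall parity bit, giving SEC-DED capability."""
    d = [(nibble >> i) & 1 for i in range(4)]      # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                        # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                        # covers positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]                        # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]    # codeword positions 1..7
    overall = 0
    for b in bits:
        overall ^= b
    return bits + [overall]

def hamming84_decode(code):
    bits = list(code[:7])
    overall = 0
    for b in code:
        overall ^= b
    syndrome = 0
    for pos, b in enumerate(bits, start=1):        # XOR of set-bit positions
        if b:
            syndrome ^= pos
    if syndrome and overall:                       # single error among bits 1..7
        bits[syndrome - 1] ^= 1
        status = "corrected"
    elif syndrome:                                 # syndrome set, parity even
        return None, "double error detected"
    elif overall:                                  # overall parity bit flipped
        status = "corrected"
    else:
        status = "ok"
    d = [bits[2], bits[4], bits[5], bits[6]]
    return sum(b << i for i, b in enumerate(d)), status

code = hamming84_encode(0b1011)
code[3] ^= 1                       # inject a single bit error
print(hamming84_decode(code))      # -> (11, 'corrected')
```

A step-wise scheme as mentioned above would re-apply such a short code over interleaved slices of a symbol; genuinely multi-error-correcting performance still requires BCH/Reed-Solomon-class codes.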
PENCA (programmable encoding architecture) is a new approach to multiple error detection and correction which is, at present, based on BCH codes and made reasonably fast by parallel hardware. Furthermore, it allows for adaptive error correction based on the quality of the channel, thereby providing a better overhead/performance ratio than methods based on a fixed number of allowed error bits per symbol, tailored to worst-case conditions.
PENCA is currently becoming part of an industrial communication system developed in the ParSeC project, a cooperative effort of industry, universities and research institutes, funded by the German Federal Ministry of Education and Research (BMBF).
Petr Pfeifer received his Ing. (M.Sc.) degree in Measurement and Instrumentation from the Czech Technical University in Prague in 2001. He was then employed by global companies such as STMicroelectronics and Tyco International in senior R&D and management positions. He received an M.Sc. in economics and management and a Senior Executive MBA degree from Nottingham Trent University. In 2015 he received his Ph.D. in technical cybernetics (reliability of nanoscale microelectronic devices) from the Technical University of Liberec. He has been employed by Brandenburg University of Technology (BTU Cottbus-Senftenberg) since 2015. He works in the research, design and development of advanced industrial systems, and his interests and professional work range from the design and manufacturing of digital VLSI ASICs, advanced systems using field-programmable gate arrays and complex programmable logic devices, measurement and control, signal processing, and communication, industrial, automotive and safety systems, to the reliability aspects of dependable systems using modern submicrometer technologies.
Heinrich Theodor Vierhaus received a diploma degree in electrical engineering from Ruhr-University Bochum (Germany) in 1975 and a doctorate in EE from the University of Siegen in 1983. From 1983 to 1996 he was a senior researcher with GMD, the German national research institute for information technology. Since 1996 he has been a full professor of computer engineering at Brandenburg University of Technology, since 2013 re-founded as BTU Cottbus-Senftenberg.
He has authored more than 100 papers in the area of test, testable design and fault tolerant computing, and he was the General Chair of the IEEE DDES 2011 symposium in Cottbus. He is also the coordinator of the East-Central European network on “Dependable Cyber Physical Systems”, linking 5 universities in 4 countries.
“Test signals used in electroacoustics and speech technology”
Prof. Andrzej Dobrucki
and
Prof. Stefan Brachmański
Faculty of Electronics
Wroclaw University of Technology
Wroclaw, Poland
The presentation consists of two parts. In the first, the typical signals applied in the testing of electroacoustic devices are presented, e.g. the harmonic signal, slowly and quickly swept sinusoids, impulses, Gaussian noise, Maximum Length Sequences, and Golay complementary sequences. The statistical and spectral properties of these signals are described, and methods for the analysis of stationary and non-stationary signals are presented as well. In the second part, the signals used for evaluating the quality of coded and transmitted speech are presented. For this purpose, natural or artificial speech signals are used. The test material can consist of units having semantic meaning as well as of nonsense syllables or words. For the Polish language, the Polish Standard provides a set of logatom lists. American ANSI standards include, among others, rhyme tests. The International Telecommunication Union (ITU-T) guidelines specify the requirements for a test-signal base for telecommunications applications. This recommendation presents a set of test signals of various complexity with many typical speech parameters. These signals are intended for both subjective and objective evaluation of the quality of speech transmission.
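As an illustration of one of the listed excitation signals, the sketch below (function name and tap choice are ours, not from any standard) generates a Maximum Length Sequence with a linear feedback shift register and demonstrates the two-valued circular autocorrelation that makes MLS attractive for impulse-response measurement:

```python
# Sketch: generating a Maximum Length Sequence (MLS) with a linear feedback
# shift register. Taps [3, 2] correspond to the primitive polynomial
# x^3 + x^2 + 1, giving a sequence of period 2^3 - 1 = 7.

def mls(n_bits, taps):
    """Return one period (2**n_bits - 1 samples, values +/-1) of an MLS."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        out = state[-1]
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        seq.append(1 if out else -1)
        state = [feedback] + state[:-1]
    return seq

s = mls(3, [3, 2])
N = len(s)
# The circular autocorrelation of an MLS equals N at lag 0 and -1 at every
# other lag, so deconvolving a measured response is nearly trivial.
for lag in range(N):
    r = sum(s[i] * s[(i + lag) % N] for i in range(N))
    assert r == (N if lag == 0 else -1)
```

In practice much longer registers (e.g. 16 or 18 bits) are used so that the sequence outlasts the room or device impulse response being measured.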
Andrzej Dobrucki received the M.Sc. degree in 1971 from the Faculty of Electronics of Wroclaw University of Technology, and began his scientific career in electroacoustics at the Institute of Telecommunications and Acoustics. In 1977 he received the Ph.D. degree for a dissertation on the vibration and sound radiation of conical shells. In 1993 he received the D.Sc. degree, and in 2007 he received the title of full professor from the President of Poland. His research interests are in the construction and measurement of electroacoustic transducers, numerical modeling of acoustic fields, vibrations in mechanical structures, and digital processing of audio signals. He is a consultant for several companies manufacturing electroacoustic transducers.
He has published more than 200 scientific works: 5 books, 49 journal papers, 10 book chapters, and presentations at international and local scientific conferences. He is also the author of 6 patents and has supervised 15 Ph.D. projects. Prof. Dobrucki is a reviewer for many scientific journals, e.g. Physical Review, the Journal of the Acoustical Society of America, Archives of Acoustics, and IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control.
Andrzej Dobrucki is a co-founder and active member of the Polish Section of the Audio Engineering Society (AES). In the years 1995-97 and 2003-2007 he held the post of Chairman of the AES Polish Section. In 2007, during the AES Convention in Vienna, Prof. Dobrucki was honored with the Fellowship Award of the AES. He is also a member of the Polish Acoustical Society and has held the post of President of the Society since 2014.
Andrzej Dobrucki is an experienced academic teacher. He gives the lectures “Physical Acoustics”, “Electroacoustics”, “Electroacoustic Transducers” and “Acoustic Measurements” for students of Wroclaw University of Technology. For many years he has given the lecture “Modern technologies in hearing aids” for students of the University Paris 12 Val de Marne.
Stefan Brachmański is an Assistant Professor of Speech Communication at the Wroclaw University of Technology, Poland, where he graduated and obtained his Ph.D. degree. His research interests include speech transmission quality evaluation, speech coding, speech enhancement, forensic acoustics, and audio restoration. He is a co-author of the Polish Standard “Methods for Measurements of Logatom Intelligibility in Analog Communication Systems” and the author of the draft Polish Standard “Methods for Measurements of Logatom Intelligibility in Digital Communication Systems”. He is the author or co-author of 179 papers on speech technology and the assessment of speech quality. He is a member of the Audio Engineering Society, the Polish Acoustical Society, the Polish Phonetics Association, and the European Acoustics Association. In the current term he is also Vice-Dean of the Faculty of Electronics.
“Image and Video Processing with Tensor Methods”
Prof. Bogusław Cyganek
Department of Electronics
Faculty of Computer Science, Electronics and Telecommunications
AGH University of Science and Technology
Krakow, Poland
Classical methods for the processing and analysis of multidimensional signals – such as color videos and hyperspectral images – do not exploit the full information contained in their inner structure. On the other hand, recently developed tensor-based methods allow for data representation and analysis that directly account for data multidimensionality. Examples can be found in many applications, such as face recognition, image synthesis, video analysis, surveillance systems, sensor networks, data stream analysis, and marketing and medical data analysis, to name a few.
This talk focuses on the basic ideas, as well as recent achievements, in the domain of tensor-based signal processing. A systematic overview of tensor data representation, tensor decompositions, and pattern recognition with tensors will be presented. Practical aspects and tensor implementation issues will also be discussed.
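As a small taste of the implementation issues, the sketch below (helper names are hypothetical, not from the talk) shows mode-n unfolding of a 3-way tensor, the elementary matricization operation underlying Tucker/HOSVD-style decompositions:

```python
# Sketch: mode-n unfolding (matricization) of a 3-way tensor stored as
# nested lists T[i][j][k] with shape (I, J, K). Each mode's fibers become
# the rows of a matrix, on which ordinary matrix tools (e.g. SVD) can act.

def unfold(T, mode):
    """Return the mode-n unfolding as a list of rows (a matrix)."""
    I, J, K = len(T), len(T[0]), len(T[0][0])
    if mode == 0:   # rows indexed by i; columns ordered with j varying fastest
        return [[T[i][j][k] for k in range(K) for j in range(J)]
                for i in range(I)]
    if mode == 1:   # rows indexed by j; columns ordered with i varying fastest
        return [[T[i][j][k] for k in range(K) for i in range(I)]
                for j in range(J)]
    # mode == 2: rows indexed by k; columns ordered with i varying fastest
    return [[T[i][j][k] for j in range(J) for i in range(I)]
            for k in range(K)]

# A 2x2x2 example tensor
T = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
assert unfold(T, 0) == [[1, 3, 2, 4], [5, 7, 6, 8]]
```

Computing the SVD of each unfolding and keeping the leading left singular vectors is exactly how the factor matrices of a (truncated) HOSVD are obtained.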
Bogusław Cyganek received his M.Sc. degree in electronics in 1993, and then M.Sc. in computer science in 1996, from the AGH University of Science and Technology, Krakow, Poland. He obtained his Ph.D. degree cum laude in 2001 with a thesis on correlation of stereo images, and D.Sc. degree in 2011 with a thesis on methods and algorithms of object recognition in digital images.
In recent years Dr. Bogusław Cyganek has cooperated with many scientific and industrial partners, such as the University of Glasgow, UK, DLR, Germany, and the University of Surrey, UK, as well as Nisus Writer, USA, Compression Techniques, USA, Pandora Int., UK, and The Polished Group, Poland. He is an associate professor at the Department of Electronics of the AGH University of Science and Technology, Poland, as well as a visiting professor at Wroclaw University of Technology. His research interests include computer vision, pattern recognition, data mining, and the development of embedded systems. He is an author or co-author of over a hundred conference and journal papers, as well as books, the latest being “Object Detection and Recognition in Digital Images: Theory and Practice”, published by Wiley in 2013. Dr. Cyganek is a member of the IEEE, IAPR and SPIE.
GUEST SPEAKERS - TUTORIALS GIVEN AT IEEE SPA 2016
“High efficiency video coding”
Prof. Kamisetty Ramamohan Rao
Electrical Engineering Department
The University of Texas at Arlington
Texas, U.S.
In the family of video coding standards, HEVC has the promise and potential to replace or supplement all existing standards (the MPEG and H.26x series, including H.264/AVC). While the complexity of the HEVC encoder is several times that of H.264/AVC, the decoder complexity is within the range of the latter. Researchers are therefore exploring ways to reduce the HEVC encoder complexity. Kim et al. have shown that motion estimation (ME) occupies 77-81% of an HEVC encoder implementation, so the focus has been on reducing the ME complexity. Several researchers have carried out performance comparisons of HEVC with other standards such as H.264/AVC, MPEG-4 Part 2 Visual, H.262/MPEG-2 Video, H.263, and VP9, and also with image coding standards such as JPEG 2000, JPEG-LS, and JPEG XR. Several tests have shown that HEVC provides improved compression efficiency, with up to 50% bit rate reduction for the same subjective video quality compared with H.264/AVC. Besides addressing all current applications, HEVC is designed and developed to focus on two key issues: increased video resolution (up to 8Kx4K) and increased use of parallel processing architectures. A brief description of HEVC is provided; for details and implementation, the reader is referred to the JCT-VC documents, overview papers, keynote speeches, tutorials, panel discussions, poster sessions, special issues, test models (TM/HM), web/FTP sites, open-source software, test sequences, anchor bit streams, and the latest books on HEVC. Researchers are also exploring transcoding between HEVC and other standards such as MPEG-2 and H.264. Further extensions to HEVC are scalable video coding (SVC), 3D video/multiview video coding, and the range extensions, which include screen content coding (SCC), bit depths larger than 10 bits, and color sampling of 4:2:2 and 4:4:4. SCC in general refers to computer-generated objects and screen shots from computer applications (both images and videos) and may require lossless coding.
Some of these extensions were finalized by the end of 2014 (the time frame for SCC is late 2016). They also provide fertile ground for R&D. Iguchi et al. have already developed a hardware encoder for Super Hi-Vision (SHV), i.e. ultra HDTV at 7680x4320 pixel resolution, and real-time hardware implementation of an HEVC encoder for 1080p HD video has also been demonstrated. NHK is planning SHV experimental broadcasting in 2016. A 249-Mpixel/s HEVC video decoder chip for 4K Ultra-HD applications has already been developed. Bross et al. have shown that real-time software decoding of 4K (3840x2160) video with HEVC is feasible on current desktop CPUs using four CPU cores; they also note that encoding 4K video in real time remains a challenge. The Multimedia Research Group predicts 2 billion HEVC-based devices by the end of 2016.
Kamisetty Ramamohan Rao is a full professor of electrical engineering at the University of Texas at Arlington (UT Arlington). He is credited with the co-invention of the discrete cosine transform (DCT), along with N. Ahmed and T. Natarajan, through their benchmark publication: N. Ahmed, T. Natarajan, and K. R. Rao, "Discrete Cosine Transform", IEEE Trans. Computers, pp. 90-93, Jan. 1974.
Dr. Rao received his B.S.E.E. from the College of Engineering, Guindy, affiliated with the University of Madras, India, in 1952. In 1959 he received his M.S.E.E. degree from the University of Florida, followed by an M.S.NuE from the same university in 1960. He received the Ph.D. degree in Electrical Engineering from the University of New Mexico in 1966.
In 2011, Dr. Rao reached an academic milestone, supervising his 100th graduate student.
He has published (coauthored) 16 books, some of which have been translated into Chinese, Japanese, Korean and Russian, and also issued as e-book and paperback (Asian) editions. He has supervised 87 Masters and 31 doctoral students, has published extensively, and has conducted tutorials and workshops worldwide. He has been a consultant to academia, industry and research institutes.
“Computational models for predicting sound quality”
Prof. Brian C.J. Moore
Department of Experimental Psychology
University of Cambridge
Downing Street, Cambridge CB2 3EB, England
The quality of an audio device, such as a microphone, amplifier, or headphone, depends on how accurately the device transmits the properties of the sound source to the ear(s) of the listener. Two types of “distortion” can occur in this transmission: (1) “Linear” distortion, which may be described as a deviation of the frequency response from the “target” response; (2) Nonlinear distortion, which is characterised by frequency components in the output of the device that were not present in the input. These two forms of distortion have different perceptual effects. Their effects on sound quality can be predicted using a model of auditory processing with the following stages: (1) A filter to take into account the transmission of sound from the device to the ear of the listener; (2) A filter to simulate the effects of transmission through the middle ear; (3) An array of bandpass filters to simulate the auditory filters that exist in the cochlea of the inner ear. For predicting the perceptual effects of linear distortion, a model operating in the frequency domain can be used. For predicting the perceptual effects of nonlinear distortion, a model operating in the time domain is required, since the detailed waveforms at the outputs of the auditory filters need to be considered. The models described have been shown to give accurate predictions for a wide range of “artificial” and “real” linear and nonlinear distortions.
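The filterbank stage (3) can be sketched in code. This is only an illustrative bank of simple second-order resonators with arbitrarily chosen center frequencies and bandwidths; the auditory filters in the models described are more elaborate (e.g. level-dependent, asymmetric) shapes:

```python
# Sketch (illustrative parameters, not the model's actual filters): passing a
# signal through an array of bandpass filters, as in stage (3) of the model.
import math

def resonator(signal, fc, bw, fs):
    """Filter `signal` with a second-order resonator centered at fc Hz."""
    r = math.exp(-math.pi * bw / fs)              # pole radius from bandwidth
    a1 = -2 * r * math.cos(2 * math.pi * fc / fs)
    a2 = r * r
    b0 = 1 - r                                    # rough gain normalization
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b0 * x - a1 * y1 - a2 * y2
        y2, y1 = y1, y
        out.append(y)
    return out

fs = 16000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(2048)]
# A three-channel "filterbank": the channel centered at the tone frequency
# should carry the most energy, mimicking a peak in the excitation pattern.
energies = [sum(y * y for y in resonator(tone, fc, 200, fs))
            for fc in (500, 1000, 2000)]
```

For nonlinear distortion the time-domain outputs themselves (not just the channel energies) would be retained and compared, as the abstract explains.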
Brian Moore is Emeritus Professor of Auditory Perception in the University of Cambridge. His research interests are: the perception of sound in normal and impaired hearing; design of signal processing hearing aids for sensorineural hearing loss; methods for fitting hearing aids to the individual; perception of music and of musical instruments. He is a Fellow of the Royal Society, the Academy of Medical Sciences, the Acoustical Society of America, The Audio Engineering Society, and the Association for Psychological Science, and an Honorary Fellow of the Belgian Society of Audiology and the British Society of Hearing Aid Audiologists. He is President of the Association of Independent Hearing Healthcare Professionals (UK). He has written or edited 20 books and over 640 scientific papers and book chapters. He has been awarded the Littler Prize and the Littler Lecture of the British Society of Audiology, the Silver and Gold medals of the Acoustical Society of America, the first International Award in Hearing from the American Academy of Audiology, the Award of Merit from the Association for Research in Otolaryngology, the Hugh Knowles Prize for Distinguished Achievement from Northwestern University and an honorary doctorate from Adam Mickiewicz University, Poland. He is wine steward of Wolfson College, Cambridge.
“The crucial role of mathematics in circuits, systems and signal processing research and education”
Prof. Joos Vandewalle
Katholieke Universiteit Leuven
Electrical Engineering Department - ESAT, Stadius Division
Leuven, Flanders, Belgium
Over recent years the role of mathematics in innovations in circuits, systems and signal processing has increased considerably. The talk will give an overview of the driving forces and their impact on research and education. Examples will be given of mathematical methodologies for signal and image classification, data fusion, and biomedical diagnostics using support vector machines and matrix and tensor decompositions. Cryptographic algorithms are also crucial in our modern society. Important lessons can be learned for research planning, dissemination and reproducibility, as well as for teaching in engineering.
Joos Vandewalle obtained the electrical engineering degree and a doctorate in applied sciences from KU Leuven, Belgium, in 1971 and 1976. Until October 2013 he was a full professor at the Department of Electrical Engineering (ESAT), Katholieke Universiteit Leuven, Belgium, and head of the SCD division at ESAT, with more than 150 researchers. Since October 2013 he has been a professor emeritus with assignments at KU Leuven. His present tasks include chairing the positioning test for engineering in Flanders, serving as a board member of the Flemish Academy in Brussels, chairing Ph.D. defenses, …
He has held visiting positions at the University of California, Berkeley, and at I3S CNRS, Sophia Antipolis, France.
He has taught courses in linear algebra, linear and nonlinear system and circuit theory, signal processing, and neural networks. His research interests are in mathematical system theory and its applications in circuit theory, control, signal processing, cryptography and neural networks. He has (co-)authored more than 300 international journal papers and obtained several best paper and research awards, including, in 2016, the IEEE CAS Desoer Technical Achievement Award. His publications have received over 30,000 Google Scholar citations. He is a Fellow of the IEEE, IET, and EURASIP and a member of the Academia Europaea and of the Belgian Academy of Sciences. From 2009 until 2013 he was a member of the Board of Governors of the IEEE Circuits and Systems Society. He is a member of the Fetzer Advisory Council on Engineering, Chair of the IEEE CAS Circuits and Systems Education and Outreach TC, and Chairman of the Class of Technical Sciences of the Belgian Academy KVAB.
http://www.ae-info.org/ae/User/Vandewalle_Joseph
GUEST SPEAKERS - TUTORIALS GIVEN AT IEEE SPA 2015
“Error Resilience in Nano-Electronic Digital Circuits and Systems”
Christian Gleichner, Prof. Heinrich T. Vierhaus
BTU Cottbus-Senftenberg, Germany
For more than 10 years, many authors have predicted dependability problems with large-scale integrated circuits implemented in nano-technologies. The reasons are new and enhanced fault mechanisms that cause either a higher vulnerability to transient faults, due to particle radiation, or premature aging due to device degradation. More recently, power dissipation on ICs has become a major problem, also affecting reliability, since most fault mechanisms are strongly enhanced at higher temperatures. The final challenge is to handle a variety of fault effects at minimum cost in extra hardware in order to enhance dependability and system lifetime.
The tutorial first gives an introduction to the basic problems. Next, methods for the detection and correction of short transient faults are shown. Then methods that can detect and correct delay faults are presented, followed by new architectures that may handle transient faults and delay faults in combination. Built-in self-repair (BISR) is presented as a method that is not capable of on-line error correction but may be helpful for lifetime extension. Finally, these technologies are compared in terms of their requirements for extra time or extra power.
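The trade-off between fault coverage and extra hardware can be illustrated with the simplest classical masking scheme, triple modular redundancy (TMR); it is not one of the new architectures in the tutorial, but it shows the voting principle that such architectures refine at lower cost:

```python
# Sketch: triple modular redundancy (TMR). Three copies of a module compute
# the same result; a bitwise 2-of-3 majority vote masks any single transient
# fault, at the price of roughly 3x hardware plus the voter.

def majority_vote(a, b, c):
    """Bitwise 2-of-3 majority over integer-encoded module outputs."""
    return (a & b) | (a & c) | (b & c)

correct = 0b1011
faulty = correct ^ 0b0100      # one module hit by a single-event upset
assert majority_vote(correct, correct, faulty) == correct
```

The voter corrects on-line but cannot tell which module failed; the tutorial's architectures add exactly that kind of diagnosis at lower overhead.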
Heinrich Theodor Vierhaus received a diploma degree in electrical engineering from Ruhr-University Bochum (Germany) in 1975 and a doctorate in EE from the University of Siegen in 1983. From 1983 to 1996 he was a senior researcher with GMD, the German national research institute for information technology. Since 1996 he has been a full professor of computer engineering at Brandenburg University of Technology, since 2013 re-founded as BTU Cottbus-Senftenberg.
He has authored more than 100 papers in the area of test, testable design and fault tolerant computing, and he was the General Chair of the IEEE DDES 2011 symposium in Cottbus. He is also the coordinator of the East-Central European network on “Dependable Cyber Physical Systems”, linking 5 universities in 4 countries.
Christian Gleichner received a Diploma in Computer Science from BTU Cottbus in 2009. Then he worked as a junior researcher at BTU Cottbus in industrial projects with a focus on test technology for embedded processors in automotive applications. He received his doctorate (Dr.-Ing.) in computer engineering in 2014. Since then he has been a senior researcher at BTU Cottbus-Senftenberg with a research focus on test technology, specifically for systems in the field of application.
“Cross-domain applications of multimodal human-computer interfaces”
Prof. Andrzej Czyżewski
Gdansk University of Technology
ETI Faculty, Multimedia System Department
Gdansk, Poland
The multimodal interfaces developed for educational applications and for disabled people are presented, including an interactive electronic whiteboard based on video image analysis, an application for controlling computers with mouth gestures, an audio interface for speech stretching for hearing-impaired and stuttering people, and an intelligent pen for diagnosing and ameliorating developmental dyslexia. The eye-gaze tracking system named “CyberEye” is presented, including a method for analyzing the visual activity of patients remaining in a vegetative state, helping to assess their awareness. A scent-emitting multimodal computer interface is also discussed, and a new approach to diagnosing Parkinson’s disease is shown, used to evaluate motor and behavioral symptoms of this neurodegenerative disease. The presentation concludes with further topics and demonstrations of technologies developed for intelligent surveillance systems, environmental monitoring systems, and automated solutions for the enhancement of degraded audio recordings.
Prof. Andrzej Czyżewski is a native of Gdansk, Poland. He received his M.Sc. degree in Sound Engineering from the Gdansk University of Technology, Poland, in 1982, his Ph.D. degree in 1987, and his D.Sc. degree in 1992 from the Cracow Academy of Mining and Metallurgy in Poland. He joined the staff of the Sound Engineering Department of the Gdansk University of Technology in 1984. In December 1999 the President of Poland granted him the title of Professor, and in 2002 the Senate of his university appointed him to the position of Full Professor. He is the author of more than 500 research papers published in international journals or presented at congresses and conferences around the world. He is also the author of 10 Polish patents in the domain of computer science and 6 international patents. Prof. Czyżewski serves as Head of the Multimedia Systems Department of the Gdansk University of Technology. He is a Fellow of the Audio Engineering Society and a member of the IEEE, the International Rough Set Society, and others. He has led more than 30 domestic research grant projects and 7 international projects. Together with his research team he has won around 40 domestic and international prizes and medals for achievements in engineering science, including the First Prize of the Polish Prime Minister, received twice (in 2000 and 2015).
“Thermal camera cores - present and future”
MSc Harald Dingemans
Managing Director of Linc Polska
Poznan, Poland
and
MSc Jakub Sobek
Certified MOBOTIX and FLIR Trainer Linc Polska
Poznan, Poland
The military has used infrared thermal imaging cameras for many years as a way to see better on the battlefield at night and through smoke. The cores of thermal cameras have been coming down in price over the past few years, so this technology can now be used in a wide variety of applications. Thermal camera cores are designed for easy and efficient integration into higher-level assemblies and platforms. This tutorial presents the current possibilities of thermal imaging and the future of this technology.
Harald Dingemans is a graduate of the Hogeschool van Utrecht. He is a specialist in technical security, especially in visual surveillance systems. He is a visionary with long-standing practical experience, and he also takes part in international ventures. There is nothing that is impossible for him to achieve, and he can easily put theory into practice. Above all, Mr. Dingemans often helps young people to realize their ideas and achieve their goals. Currently, he is the Managing Director of Linc Polska and the Chairman of the board of Smart-i. He actively cooperates with the Polish Chamber of Security (PIO), the Polish Engineers and Technicians Association of Technical Security and Security Management “POLALARM”, and the Polish Chamber of Alarm Systems (PISA). Pragmatism and innovation - these are the words that best describe Mr. Dingemans.
Jakub Sobek is a graduate of the Faculty of Computing Science at Poznan University of Technology. His master’s thesis, written at the Division of Signal Processing and Electronic Systems, concerned the implementation of the SWIFT algorithm for fingerprint recognition. After his studies he started to work for Linc Polska as a technical trainer. In 2012 he passed the examination process organized by MOBOTIX, and since then he has been the first and only certified MOBOTIX trainer in Poland. MOBOTIX is a producer of high-resolution megapixel IP cameras. Jakub Sobek is also a certified trainer for FLIR Security Products.
GUEST SPEAKERS - TUTORIALS GIVEN AT IEEE SPA 2014
“Signal Processing for Big Data”
Prof. Georgios B. Giannakis
University of Minnesota
USA
We live in an era of data deluge. Pervasive sensors collect massive amounts of information on every bit of our lives, churning out enormous streams of raw data in various formats. Mining information from unprecedented volumes of data promises to limit the spread of epidemics and diseases, identify trends in financial markets, learn the dynamics of emergent social-computational systems, and also protect critical infrastructure including the smart grid and the Internet’s backbone network. While Big Data can be definitely perceived as a big blessing, big challenges also arise with large-scale datasets. The sheer volume of data makes it often impossible to run analytics using a central processor and storage, and distributed processing with parallelized multi-processors is preferred while the data themselves are stored in the cloud. As many sources continuously generate data in real time, analytics must often be performed “on-the-fly” and without an opportunity to revisit past entries. Due to their disparate origins, massive datasets are noisy, incomplete, prone to outliers, and vulnerable to cyber-attacks. These effects are amplified if the acquisition and transportation cost per datum is driven to a minimum. Overall, Big Data present challenges in which resources such as time, space, and energy, are intertwined in complex ways with data resources. Given these challenges, ample signal processing opportunities arise. This tutorial lecture outlines ongoing research in novel models applicable to a wide range of Big Data analytics problems, as well as algorithms to handle the practical challenges, while revealing fundamental limits and insights on the mathematical trade-offs involved.
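One concrete instance of analytics performed “on-the-fly”, without an opportunity to revisit past entries, is a streaming estimator of mean and variance. The sketch below uses Welford’s classical one-pass update (our choice of example, not drawn from the lecture):

```python
# Sketch: Welford's online algorithm, a single-pass, constant-memory
# estimator of mean and variance over a data stream. Each sample is seen
# exactly once, matching the streaming constraint of Big Data analytics.

class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the mean

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        """Population variance of the samples seen so far."""
        return self.m2 / self.n if self.n else 0.0

rs = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    rs.push(x)
```

The same design pattern, a small sufficient statistic updated per datum, underlies far richer streaming estimators (online regression, sketches, subspace trackers) discussed in this line of research.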
Georgios B. Giannakis (Fellow’97) received his Diploma in Electrical Engr. from the Ntl. Tech. Univ. of Athens, Greece, 1981. From 1982 to 1986 he was with the Univ. of Southern California (USC), where he received his MSc. in Electrical Engineering, 1983, MSc. in Mathematics, 1986, and Ph.D. in Electrical Engr., 1986. Since 1999 he has been a professor with the Univ. of Minnesota, where he now holds an ADC Chair in Wireless Telecommunications in the ECE Department, and serves as director of the Digital Technology Center. His general interests span the areas of communications, networking and statistical signal processing – subjects on which he has published more than 365 journal papers, 615 conference papers, 20 book chapters, two edited books and two research monographs (h-index 108). Current research focuses on sparsity and big data analytics, wireless cognitive radios, mobile ad hoc networks, renewable energy, power grid, gene-regulatory, and social networks. He is the (co-) inventor of 22 patents issued, and the (co-) recipient of 8 best paper awards from the IEEE Signal Processing (SP) and Communications Societies, including the G. Marconi Prize Paper Award in Wireless Communications. He also received Technical Achievement Awards from the SP Society (2000), from EURASIP (2005), a Young Faculty Teaching Award, and the G. W. Taylor Award for Distinguished Research from the University of Minnesota. He is a Fellow of EURASIP, and has served the IEEE in a number of posts, including that of a Distinguished Lecturer for the IEEE-SP Society.
“Wireless 100Gb/s and beyond”: Challenges and approaches to achieve ultra-high speed wireless communications.
Prof. Dr.-Ing. Rolf Kraemer
Lehrstuhl für drahtlose Kommunikation
BTU-Cottbus-Senftenberg
Cottbus
and
Department “Wireless Systems”
IHP, Innovation for High Performance
Leibniz Institut für innovative Mikroelektronik
Frankfurt/Oder
Germany
Ultra-high-speed wireless communication will enable next-generation internet access with unprecedented comfort and service. According to predictions, the required speed will increase in line with the ITRS roadmap for NVM storage. This leads to wireless multi-gigabit access within the next few years for short-range networks, and within the next 5-10 years for cellular systems.
The talk will highlight the challenges in developing ultra-high-speed wireless systems at 100 Gb/s and beyond. The different sub-systems, such as antennas, the RF front-end, the baseband processor and the MAC processor, will be briefly analyzed. The German research community has launched a special priority program to investigate different wireless 100 Gb/s approaches, and the talk will briefly address the individual project approaches. A special focus will be the discussion of a potential paradigm shift towards more analog signal processing, which is interesting since it promises a reduction in circuit complexity and power consumption.
Prof. Dr.-Ing. Rolf Kraemer received his diploma and Ph.D. in electrical engineering from RWTH Aachen in 1979 and 1985, respectively. The topic of his thesis was “Portability Aspects of System Software”. He joined the Philips Research Laboratories in 1985, where he worked until 1998, in Hamburg and Aachen, on distributed systems, communication systems, wireless high-speed communication, and embedded control and management software, in different positions and responsibilities. In 1998 he became a professor at the Technical University of Cottbus, with a joint appointment as head of the Wireless Systems department at the IHP in Frankfurt (Oder). His research interests are in ultra-high-speed wireless systems as well as ultra-low-power wireless communications; to this end he also explores different design methods such as GALS. He remains very interested in distributed software design and protocol structures. His teaching covers distributed operating systems, mobile communication, and sensor networks. At the IHP he leads a research department with more than 50 researchers working on high-speed wireless communication systems, sensor networks and middleware systems, as well as dependable systems. In 2012 he was granted the DFG priority program “Wireless 100Gb/s and beyond”, and he has been nominated as speaker of this program. Prof. Kraemer is the founder of 2 start-up companies and has worked as a business angel since 2009.
“New Ways of Doctoral Education in the Age of Cyber Physical Systems”
Prof. Dr.-Ing. Heinrich Theodor Vierhaus
Computer Eng., BTU Cottbus-Senftenberg, Germany
Technical education has to meet new challenges with the arrival of large-scale cyber physical systems. Since such systems are real-time critical, distributed, often safety-critical, and heterogeneous by nature, their design and their control in everyday use create challenges which must be mastered by qualified engineers. Existing schemes of technical education, however, largely follow traditional patterns: analog and digital hardware designers with at least some notion of timing on one side, and software designers, deeply embedded in their cyber space with little relation to real-time challenges, on the other. This talk presents recent efforts in international higher education strongly devoted to closing these gaps.
Heinrich Theodor Vierhaus:
Diploma in Electrical Engineering from Ruhr-Universität Bochum (1975)
Lecturer for electronics and microwave engineering at Dar-es-Salaam Technical College (1975-1977)
Research assistant at University of Siegen (1978-1983)
Doctorate (Dr.-Ing.) in EE from University of Siegen in 1983
Senior researcher at “German National Research Institute for Information Technology” (GMD) (1983-1996)
Since 1996 full Professor for Computer Engineering at Brandenburg University of Technology (BTU Cottbus)
Special interest: Innovative concepts of education for doctoral students with international partners, preferably next door (Poznan, Liberec, Tallinn).
“Compositional models for signal processing - perspectives from audio processing”
Prof. Tuomas Virtanen
Department of Signal Processing
Tampere University of Technology (TUT)
Finland
Many classes of data are composed as purely additive combinations of latent parts that never subtract from or diminish one another. Compositional models such as non-negative matrix factorization can effectively learn these latent structures of the data. Even though such models apply most naturally to non-signal data such as population counts, they can be employed to explain other forms of data as well. In signal processing, these models can yield more interpretable representations than many established signal processing methods. During the last few years such models have therefore provided new paradigms for long-standing signal processing problems, e.g. source separation and robust pattern recognition. For example, in the field of audio processing, where we often deal with mixtures of sounds, the models have been used as parts of processing systems to advance the state of the art on many problems, such as the analysis of polyphonic music and the recognition of noisy speech. In this presentation we show how compositional models can be powerful tools for signal processing, providing highly interpretable representations and enabling diverse applications such as signal analysis, recognition, manipulation, and enhancement. Several examples from the field of audio processing will demonstrate the effectiveness of the models.
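As a minimal illustration of the compositional idea (a sketch, not code from the presentation; the toy data and parameters below are invented), the classic Lee-Seung multiplicative updates for non-negative matrix factorization can be written in a few lines of NumPy. A non-negative "spectrogram" V is approximated as W @ H, where the columns of W are the learned additive parts and H holds their non-negative activations over time:

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Factor non-negative V (F x T) into W (F x rank) @ H (rank x T)
    by minimizing Euclidean distance with Lee-Seung multiplicative updates.
    The multiplicative form keeps W and H non-negative throughout."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "spectrogram": a purely additive mix of two spectral patterns over time.
w1 = np.array([[1.0], [0.0], [2.0]])
w2 = np.array([[0.0], [3.0], [1.0]])
V = w1 @ np.array([[1.0, 0.0, 2.0, 1.0]]) + w2 @ np.array([[0.0, 1.0, 1.0, 0.0]])

W, H = nmf(V, rank=2)
print(np.linalg.norm(V - W @ H))  # small reconstruction error
```

Because the updates are multiplicative, no explicit non-negativity projection is needed; the eps term only guards against division by zero.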
Tuomas Virtanen is an Academy Research Fellow and an adjunct professor at the Department of Signal Processing, Tampere University of Technology (TUT), Finland. He received the M.Sc. and Doctor of Science degrees in information technology from TUT in 2001 and 2006, respectively. He has also worked as a research associate at the Cambridge University Engineering Department, UK. He is known for his pioneering work on single-channel sound source separation using non-negative matrix factorization based techniques, and their application to noise-robust speech recognition, music content analysis and audio event detection. In addition to the above topics, his research interests include content analysis of audio signals in general and machine learning. He has authored more than 100 scientific publications on these topics. He received the IEEE Signal Processing Society 2012 best paper award for his article "Monaural Sound Source Separation by Nonnegative Matrix Factorization with Temporal Continuity and Sparseness Criteria" as well as two other best paper awards.
GUEST SPEAKERS - TUTORIALS GIVEN AT IEEE SPA 2013
“Video coding standards: AVS China, H.264/MPEG-4 PART 10, HEVC, VP9, DIRAC and VC-1”
Prof. Kamisetty Ramamohan Rao
IEEE Fellow, E.E. Dept., UTA, Arlington, Texas, USA
Video coding standards: AVS China, H.264/MPEG-4 PART 10, HEVC (high efficiency video coding a.k.a. H.265), VP9 (Google), DIRAC (BBC) and VC-1 (SMPTE) are presented. HEVC is the latest standard developed by ITU-T and ISO/IEC.
Encoder/decoder functionalities along with advantages/applications are discussed.
Open source software, ftp/web sites, standards documents, review papers, conformance bit streams, test sequences, performance (comparison/evaluation) metrics, keynote speeches, tutorials, reflectors and related resources are listed. Research projects at M.S. and Ph.D. levels are outlined.
Kamisetty Ramamohan Rao received the Ph.D. degree in electrical engineering from The University of New Mexico, Albuquerque, in 1966. He is now a professor of electrical engineering at the University of Texas at Arlington, Texas. He has published (coauthored) 16 books, some of which have been translated into Chinese, Japanese, Korean and Russian, and have also appeared as e-books and paperback (Asian) editions. He has supervised 87 Masters and 31 doctoral students. He has published extensively, conducted tutorials/workshops worldwide, and has been a consultant to academia, industry and research institutes.
“Visually impaired mobility and ICT supports”
Edwige E. Pissaloux
ISIR/Paris-Sorbonne & CNRS/UMR 7222, France
This talk will propose a computational approach to the concept of mobility and will discuss its existing and possible ICT implementations.
Based on the different psycho-cognitive human spaces introduced by B. Tversky (Stanford, 2004), mobility is considered as an interaction task between these spaces and human beings, one which has emerged during human evolution and adaptation to the environment. Accordingly, different elementary functions subtending mobility, each proper to a different space, are identified; these functions, which should be evolvable and suitable for constant human-environment co-evolution, could be integrated into new holistic ICT mobility-assistance technology. A state of the art of existing and ongoing academic ICT projects targeting appropriate support of this mobility concept will be provided.
Edwige Pissaloux is a full professor and researcher at the ISIR (Institute of Intelligent Systems and Robotics) at Paris-Sorbonne University (University Paris 6 or UPMC). She works on the modelling and design of vision and visual perception (cognitive) systems, and assistive devices. Recently (2010-2012), she participated in the “AsTeRICS” EU FP7 project (design of a visible-spectrum intelligent gaze tracker for upper-limb motor-impaired people). Prof. Pissaloux is a member of several international advisory boards of universities and research institutes, and teaches at several universities (Australia, Hong-Kong, Canada). Prof. Pissaloux acts as an expert for many international institutions such as the European Commission, ARC/Australia, HKGRC/China, CRSNI/Canada, etc. In her free time, as a violinist and violin teacher, she teaches violin to visually impaired children.
“Combining Fault Tolerance and Self Repair in a Virtual TMR Scheme”
Heinrich Theodor Vierhaus
Computer Eng., BTU Cottbus, Germany
With decreasing minimum feature size, nano-electronic circuits and systems exhibit an increasing variety of defect and fault mechanisms. Their rising sensitivity to radiation- and coupling-induced single and multiple event upsets is one problem; new or enhanced aging processes that may lead to early-lifetime failures pose another. The compensation of transient fault effects is a well-explored area of science, while repair technologies that tackle permanent faults have so far found broad acceptance only for embedded memories and FPGA-based systems. However, precisely such methods and architectures are of great practical importance for the compensation of early-lifetime failures. Combining fast error compensation with repair mechanisms is even more challenging, since minimum power consumption also becomes important in many areas of application.
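As a hedged aside (a sketch of the general technique, not taken from the talk), the core error-compensation step of triple modular redundancy is a bitwise majority vote over three redundant module outputs, so any single faulty module is out-voted at every bit position:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 majority vote: each output bit is 1
    exactly when at least two of the three module outputs have a 1 there."""
    return (a & b) | (a & c) | (b & c)

# A single-event upset flips one bit in module b; the voter masks it.
good = 0b10110010
faulty = good ^ 0b00000100  # bit 2 flipped by a transient fault
assert tmr_vote(good, faulty, good) == good
print(bin(tmr_vote(good, faulty, good)))  # prints 0b10110010, fault masked
```

In hardware the same expression is three AND gates and an OR gate per bit; a scheme that also targets permanent faults would, in addition, flag the persistently disagreeing module as a candidate for repair or replacement.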