

Invited Talks of SPIN 2016


Prof. Hui-Huang Hsu
Department of Computer Science and Information Engineering.
Tamkang University, Taiwan.

Title: Recognizing Human Behavior via Smartphone Sensory Data

Abstract: Understanding the movement, or even the behavior, of humans requires various kinds of sensory data from wearable devices or environmental sensors. Nowadays, smartphones are equipped with sensors that can serve such a purpose. Most importantly, people carry a smartphone most of the time. Therefore, compared to other types of sensors, the smartphone is an unobtrusive sensing device for the user. In this talk, we will first introduce general concepts. We will then discuss some of the possibilities. Results of selected research projects will also be presented.


Prof. Alexander Kurganov
Tulane University,
LA, USA

Title: Central Schemes: A Powerful Black-Box Solver for Nonlinear Hyperbolic PDEs

Abstract: Nonlinear hyperbolic PDEs arise in modeling a wide range of phenomena in physical, astrophysical, geophysical, meteorological, biological, chemical, financial, social and other scientific areas. Being equipped with efficient, accurate and robust numerical methods is absolutely necessary to make substantial progress in all of those fields of research. This talk will be focused on non-oscillatory central schemes, which can be used as a high-quality black-box solver for general hyperbolic systems of conservation laws. I will first briefly show their derivation, discuss some current developments, and then present several recent applications including modern network models of traffic and pedestrian flows, gas pipes, and dendritic river systems.
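
As a flavour of how simple such a black-box solver can be, the sketch below (my illustration, not code from the talk) applies the classical first-order Lax-Friedrichs central scheme to the inviscid Burgers equation; only the flux function is problem-specific and no Riemann solver is needed. The non-oscillatory schemes discussed in the talk are higher-order refinements of this idea.

```python
# Minimal sketch (illustrative only): a first-order central scheme for the
# inviscid Burgers equation u_t + (u^2/2)_x = 0 on a periodic domain.
import numpy as np

def f(u):                          # flux; the only problem-specific ingredient
    return 0.5 * u**2

def lax_friedrichs_step(u, dx, dt):
    """One Lax-Friedrichs step; no Riemann solver is required."""
    up = np.roll(u, -1)            # u_{j+1} (periodic boundaries)
    um = np.roll(u, 1)             # u_{j-1}
    return 0.5 * (up + um) - dt / (2 * dx) * (f(up) - f(um))

x = np.linspace(0, 2 * np.pi, 400, endpoint=False)
u = np.sin(x) + 0.5               # smooth data that develops a shock
dx, t, T = x[1] - x[0], 0.0, 1.5
while t < T:
    dt = min(0.4 * dx / max(1e-12, np.abs(u).max()), T - t)  # CFL condition
    u = lax_friedrichs_step(u, dx, dt)
    t += dt
print("solution range after shock formation:", u.min(), u.max())
```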


Prof. Hsi-Pin Ma
Department of Electrical Engineering,
National Tsing Hua University, Taiwan 30013

Title: Wireless Healthcare: Electronics and Systems

Abstract: eHealth is an emerging topic for healthcare systems. Based on the infrastructure of mobile communications systems, a healthcare system can provide many more services. In this talk, a platform for a mobile healthcare system is presented, with ECG/respiration monitoring as an example. We can use a low-complexity, low-power wireless sensor node to record the ECG signal and monitor it continuously for a long time. Through a mobile phone, the existing 3G/WiFi network can send back the recorded ECG signals for further analysis, so there is no extra deployment cost for the whole infrastructure. We also propose algorithms for ECG signal analysis implemented both on mobile phones and in the cloud. This gives doctors more options for providing medical services. With the possibilities of wearable sensing techniques, we can also extend the techniques to lifestyle applications and interdisciplinary collaborations. Some demos will be presented during the talk.


Prof. Peter Puschner
Technische Universitaet Wien, Real-Time Systems Group
Vienna, Austria

Title: Time-Composable Network Interfaces

Abstract: Composing networked computer systems of highly autonomous components may lead to control conflicts in the communication system. These control conflicts can be avoided by connecting the components via temporal-firewall interfaces in combination with a time-triggered communication network. We will show the benefits of this communication strategy and discuss the two access strategies (asynchronous and time-synchronized) for communicating via temporal-firewall network interfaces.


Prof. Marius Pedersen
Director of The Norwegian Colour and Visual Computing Laboratory
Faculty of Computer Science and Media Technology, Gjøvik University College, Norway

Title: Towards a Perceptual Image Quality Metric

Abstract: The evaluation of image quality is a field of research that has gained attention for many decades. Since subjective evaluation of image quality is time-consuming and resource-demanding, there is increasing effort to obtain an objective image quality metric capable of predicting perceived image quality. In this talk we give an overview of existing image quality metrics and the advancements in the field towards obtaining a perceptual image quality metric. We focus specifically on image quality metrics that simulate the human visual system, and on how well they are able to predict perceived image quality.
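
For concreteness, here is a minimal sketch of one well-known metric of this kind, the SSIM index of Wang et al., computed over the whole image (my simplification; practical implementations average SSIM over local windows, and the talk surveys many other metrics):

```python
# Minimal sketch: a single-window SSIM-style index inspired by the human
# visual system; C1 and C2 are the usual stabilizing constants.
import numpy as np

def ssim_global(x, y, L=255.0):
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (64, 64))
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
print("SSIM(ref, ref)  =", ssim_global(ref, ref))        # 1.0 by construction
print("SSIM(ref, noisy)=", round(ssim_global(ref, noisy), 3))
```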


Prof. Henrik Hautop Lund,
Professor, Center for Playware,
Technical University of Denmark

Title: Playful Modular Technology – Play with Networks of Radio Communicating Interactive Modules

Abstract: With recent technology developments, we have become able to exploit robotics and modern artificial intelligence (AI) to create playware in the form of intelligent hardware and software that creates play and playful experiences for users of all ages. Such playware technology acts as a play force which inspires and motivates you to enter into a play dynamic, in which you forget about time and place, and simultaneously become highly creative and increase your skills - cognitive, physical, and social skills. The Playware ABC concept will allow you to develop life-changing solutions for anybody, anywhere, anytime through building bodies and brains to allow people to construct, combine and create. Two decades of scientific studies of such playware in the form of playful robotics, LEGO robots for kids, minimal robot systems, and user-friendly, behavior-based, biomimetic, modular robotics led Prof. Lund’s students to form the Universal Robots company, which disrupted the industrial robotics sector and was recently sold for 285 million USD. Another example of exploiting this playful and user-friendly technology development is the modular interactive tiles system, Moto tiles (www.mototiles.com), which is designed as an alternative form of physical rehabilitation exercise to allow elderly citizens and patients to break away from monotonous training programs and participate in an exercise that is fun and exciting, and therefore more motivating. Qualitative studies show that therapists and the elderly find the training with modular interactive tiles fun and highly motivating. Further, scientific studies have shown that training with the modular tiles has a large effect on the functional abilities of the elderly. The tests of effect show that training with the Moto tiles provides improvements on a broad range of abilities including mobility, agility, balancing, strength and endurance. The playful training improves the abilities of the elderly in many areas of high importance for activities of daily living, in contrast to several other forms of training and exercise, which typically only improve a subset of these abilities. It is shown that playful training gives significant effects with substantially fewer training sessions than are needed with more traditional training methods.


Prof. Raimo Kantola
Dept of Communications and Networking
Aalto University, Finland

Title: Trust and Security for 5G and the Internet

Abstract: 5G is expected to provide ultra-reliable service. At the same time, 5G is the next step in the evolution of the Internet. In the Internet, legitimate services can fail due to unpredictable malicious activities, including Distributed Denial of Service attacks and intrusions that use viruses, Trojans and botnets. 5G needs a significant step forward in its approach to security in order to claim to provide ultra-reliable service. On the level of interactions between senders and receivers, we propose a new cooperative firewall technology, called Customer Edge Switching (CES), that admits all traffic flows based on policy. We describe our work on CES and towards dynamic policies that make use of the reputation of all Internet entities, such as hosts, customer networks, DNS servers and applications. The idea is to collect and share evidence of malicious activity, aggregate it into reputations for the different entities, and disseminate the reputation values to the cooperative firewalls, which can step up their controls based on the overall security situation and the reputation. We describe our approach to deployment, which should be feasible one network at a time. To achieve this, we propose a Realm Gateway and a step-wise deployment of trust/reputation processing into the network.
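
As a toy illustration of the reputation idea (my sketch, not the actual CES implementation), the following keeps a per-entity score that decays towards trust over time, drops when evidence of malicious activity is shared, and is consulted before a flow is admitted:

```python
# Minimal sketch (hypothetical API): aggregating shared evidence of malicious
# activity into per-entity reputations for a cooperative firewall.
from collections import defaultdict

class ReputationStore:
    def __init__(self, decay=0.95):
        self.decay = decay                     # how slowly reputations recover
        self.score = defaultdict(lambda: 1.0)  # 1.0 = fully trusted

    def report_evidence(self, entity, severity):
        """severity in (0, 1]; e.g. 1.0 for confirmed botnet membership."""
        self.score[entity] *= (1.0 - severity)

    def tick(self):
        """Periodic decay lets a reputation recover slowly over time."""
        for e in self.score:
            self.score[e] = 1.0 - self.decay * (1.0 - self.score[e])

    def admit(self, entity, threshold=0.5):
        return self.score[entity] >= threshold

store = ReputationStore()
store.report_evidence("host-198.51.100.7", severity=0.8)
print(store.admit("host-198.51.100.7"))   # False: controls stepped up
print(store.admit("dns.example.net"))     # True: no evidence yet
```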


Prof. Xavier Fernando
Director, Ryerson Communications Lab
Ryerson University, Canada

Title: Upcoming Technologies for an Interconnected Society

Abstract: Information and communication technologies (ICT) have been changing the way we live. The changes are very significant in recent times. Social media has become a part of our lives and goes way beyond being a fun accessory. It has played a key role in unleashing people power and creating collective opinions in places like Egypt and Libya. In particular, it enables like-minded people to be connected and share many things. The difference between computers, phones, cameras, televisions, audio players and even the bank machine is diminishing, as a single device can perform all these tasks and much more. Our kids use more ‘i’ devices than toys. Photonic and radio technologies jointly enable anytime, anywhere, broadband wireless connectivity. Modern wireless technology provides numerous seamless services, from receiving images, video and tweets from deep space to providing communications and tracking for underground miners. The Internet of Things is expanding rapidly. The world is increasingly populated with sensors and connected devices that automatically communicate, make decisions and perform complex tasks. The power grid is getting smarter, self-healing and more resilient. Autonomous electric cars will soon be commonplace, receiving and redelivering electricity to the grid. Our homes will soon generate their own energy and be more interactive. However, there are other issues with ICT as well. Too much reliance on ICT and automation could cause trouble when things malfunction. Management of complex systems and ensuring reliability is generally a demanding task. Privacy and security will always be an issue in a highly automated i-society. Things could get very bad during natural and manmade disasters. On another front, the power consumption of ICT equipment is rapidly increasing: that of network servers was 150 GW in 2007 and is expected to increase by 300% by 2020. This is another topic that needs attention. In this talk, various cutting-edge technologies towards the realization of the i-society will be highlighted. Their limitations will also be discussed.


Prof. Afaq Ahmad
Department of Electrical & Computer Engineering; Chair, Pre-Specialization Academic Advising Unit, College of Engineering
Sultan Qaboos University, Muscat, Oman

Title: Trustworthy Applications of Linear Feedback Shift Registers

Abstract: Many research areas use Linear Feedback Shift Registers (LFSRs) to solve increasingly complex problems. With the continuous development of information technology, numerous applications of LFSRs have been realized and successfully embedded in systems. Some of the popular applications of LFSRs are in cryptography, testing, coding theory and wireless communication. Each application of LFSRs requires unique attributes and qualities. This contribution will highlight and describe various issues that arise when LFSRs are used for different applications. In particular, features such as smaller area, higher efficiency, lower power dissipation, low cost and more secure implementation will be discussed.
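
For readers unfamiliar with the device itself, the sketch below (my illustration) implements a 16-bit Fibonacci LFSR with a maximal-length feedback polynomial; its 2^16 - 1 state cycle is the property exploited in test-pattern generation and stream ciphers:

```python
# Minimal sketch: Fibonacci LFSR with taps for x^16 + x^14 + x^13 + x^11 + 1,
# a maximal-length polynomial (the register visits all 2^16 - 1 nonzero states).
def lfsr(seed, taps, nbits):
    """Yield one pseudo-random bit per shift of the register."""
    state = seed
    while True:
        fb = 0
        for t in taps:                       # XOR of the tapped bits (1-indexed)
            fb ^= (state >> (t - 1)) & 1
        yield state & 1                      # output the least significant bit
        state = (state >> 1) | (fb << (nbits - 1))

gen = lfsr(seed=0xACE1, taps=(16, 14, 13, 11), nbits=16)
print("first 16 PRBS bits:", [next(gen) for _ in range(16)])
```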

Dr. Leonardo BOCCHI
Researcher, Department of Information Engineering
Electronic and Information Bioengineering
University of Florence
Florence Area, Italy

Title: Microcirculation Systems: Current Research and Perspectives

Abstract: The microcirculation is where the exchange of substances between the blood and the tissues takes place. The study of microcirculatory hemodynamics thus provides the key to assessing the function of the organs perfused. In recent years, it has become clear that the microcirculation may play a significant role in the pathophysiology of many diseases, and not just those, such as diabetes, cardiovascular disease and scleroderma, that are conventionally regarded as being of microcirculatory origin. Several methodologies have been applied, together with numerous tests involving different stimulations (thermal, ischemic, pharmacological), particularly for microcirculation assessment in the skin. The application of mathematical models, both physical and engineering, has initiated a new era with a rapidly increasing understanding of microvascular function. The current state of the art in this field thus includes several devices with different properties and features, providing complex data that is currently analyzed according to various mathematical and physical models. However, the current lack of consensus, and the need for agreed guidelines, is delaying routine clinical application of what has been discovered.

Prof. Rangaraj M. Rangayyan,
Department of Electrical and Computer Engineering
University of Calgary, Calgary, Alberta, Canada

Title: Computer-aided Diagnosis of Retinopathy of Prematurity

Abstract: The structure of the blood vessels in the retina is affected by diabetes, hypertension, arteriosclerosis, retinopathy of prematurity (RoP), and other conditions through modifications in shape, width, and tortuosity. Quantitative analysis of the architecture of the vasculature of the retina could assist in monitoring the evolution and stage of pathological processes, their effects on the visual system, and the response to treatment. Computer-aided detection, modeling, and quantitative analysis of features related to the retinal vascular architecture could assist in consistent, quantitative, and accurate assessment of pathological processes by ophthalmologists. This seminar provides details on digital image processing and pattern recognition techniques for the detection and analysis of retinal blood vessels, detection of the optic nerve head, modeling of shape for quantitative analysis of the temporal arcades, measurement of the thickness of retinal vessels, and detection of tortuous vessels. The techniques include methods for the detection of curvilinear structures, the Hough transform, Gabor filters, phase portraits, and specific algorithms for quantitative analysis of patterns of diagnostic interest. Analysis of a dataset of retinal fundus images of 19 premature infants with plus disease, a proliferative stage of RoP, and 91 premature infants without plus disease resulted in an area under the receiver operating characteristic curve of up to 0.98 using our parameter to quantify tortuosity. A graphical user interface is being developed to facilitate clinical application of the methods. The methods should assist in computer-aided diagnosis, follow up, and clinical management of premature infants possibly affected by RoP.
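
As a small illustration of one ingredient named above (my sketch, not the authors' pipeline, and with arbitrary parameter values), the following builds a bank of oriented real Gabor kernels; taking the per-pixel maximum response across orientations enhances elongated structures such as retinal vessels:

```python
# Minimal sketch: a bank of real Gabor kernels at several orientations.
import numpy as np

def gabor_kernel(size, sigma_x, sigma_y, freq, theta):
    """Oriented Gaussian envelope modulated by a cosine along the x' axis."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
    return g * np.cos(2 * np.pi * freq * xr)

# Elongated kernels (sigma_y > sigma_x) respond strongly to vessel-like lines
# aligned with each orientation; a detector keeps the maximum over the bank.
bank = [gabor_kernel(31, sigma_x=3, sigma_y=8, freq=0.1, theta=t)
        for t in np.linspace(0, np.pi, 8, endpoint=False)]
print(len(bank), "kernels of shape", bank[0].shape)
```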

Prof. Radim Burget.
Signal Processing Laboratory.
Department of Telecommunications, Brno University of Technology.
Brno, Czech Republic, European Union

Title: Signal Processing and Automation: Trends and Challenges

Abstract: Industry 4.0 is a collective term embracing a number of contemporary automation, data exchange and manufacturing technologies. It has been defined as 'a collective term for technologies and concepts of value chain organization' which draws together Cyber-Physical Systems, the Internet of Things and the Internet of Services. Signal processing has a big influence on this effort. Although much research has been done in this area, its transfer from research laboratories into a business environment often fails. There are plenty of obstacles that prevent its deployment into industry. This presentation will provide an overview of technologies introduced in recent years that have made a successful path from the research lab into business deployment. Furthermore, it will discuss complementary technologies related to artificial intelligence that help in industrial automation and security.

Prof. H. Vakilzadian
Department of Electrical and Computer Engineering
University of Nebraska-Lincoln
Lincoln, Nebraska, United States

Title: Challenges in Development of a Simulation-based Electrical Engineering Program

Abstract: Mathematical modeling, computational algorithms, and the science and technology of complex and data-intensive high performance computing are having an unprecedented impact on the health, security, productivity, and competitiveness of the United States. Exploitation of new capabilities, however, is only achievable when basic research on the major components of computational modeling and simulation is performed. In electrical and computer engineering, advances in computational modeling and simulation offer rich possibilities for understanding the complexity of engineered systems, predicting their behavior, and verifying their correctness. Although modeling and simulation (M&S) has been around for several decades, its importance in research and application areas is just being exploited, especially with regard to challenges in M&S for engineering complex systems, according to a report by the U.S. National Science Foundation’s Blue Panel on Simulation-Based Engineering Science (SBES) [1], the White House’s American Competitiveness Initiative (ACI) [2], the U.S. Congressional Caucus on M&S [3], and more [4-7]. The current state of M&S can be summarized as follows: 1. The importance of M&S in the design and development of physical systems is fairly well understood. 2. Research is moving ahead on challenges in M&S for engineering complex systems. 3. The references above all recommend the need for the emergence of an undergraduate discipline in SBES. 4. Major corporations offer great career opportunities for graduates with SBES knowledge. However, there is no known established program in electrical engineering which has identified the required skills, educational program requirements, training requirements, responsibilities, job descriptions, or labor codes. This presentation provides the elements of an M&S-based electrical engineering program and the challenges involved in the development and implementation of such a program for workforce development. This study was funded in part by NSF under grant number 0737530.

Prof. Yi Qian
Department of Electrical and Computer Engineering
University of Nebraska-Lincoln
Lincoln, Nebraska, United States

Title: Security for Mobile Wireless Networks

Abstract: Wireless communication technologies are ubiquitous nowadays. Most smart devices have cellular, Wi-Fi, and Bluetooth connections. These technologies have been developed over many years; nonetheless, they are still being enhanced. More development can be expected in the next 5 years, such as faster transmission data rates, more efficient spectrum usage, lower power consumption, etc. Similarly, cellular networks have evolved over several generations: for example, GSM as part of the 2G family, UMTS as part of the 3G family, and LTE as part of the 4G family. In the next few years, cellular networks will continue to evolve to keep up with the fast-growing needs of customers. Secure wireless communications will certainly be part of other advances in the industry such as multimedia streaming, data storage and sharing in clouds, mobile cloud computing services, etc. This seminar gives an overview of recent developments in security for next generation wireless networks, especially LTE/LTE-A and 5G mobile wireless networks. It will also discuss the trends and future research directions in this area.

Prof. Dr. Roland Petrasch
Department of Computer Science
Faculty of Science and Technology
Thammasat University Rangsit Campus,
Patumthani, 12121 THAILAND

Title: Industry 4.0 and Smart Manufacturing - What are the New Technological Concepts?

Abstract: The term Industry 4.0 has been used frequently with respect to German industry since 2011. It is often described as the new (fourth) industrial revolution, enabling suppliers and manufacturers to leverage new technological concepts like CPS (Cyber-Physical Systems), the Internet of Things, Big Data and Cloud Computing (CC): new or enhanced products and services can be created, costs reduced and productivity increased. Similar terms are Smart Factory or Smart Manufacturing. The ideas, concepts and technologies are not hype anymore - they are at least partly reality, but there is still a lot to do, e.g. standardization. What are these new (and old) technologies like IIoT (Industrial Internet of Things), Internet of Services, Cloud Computing, Big Data and CPS (Cyber-Physical Systems) behind Industry 4.0 and Smart Manufacturing? How do the components and technologies work together? What are new or better applications in the context of Industry 4.0? This talk provides an overview and gives some answers to these questions.

Dr. S. Dhanjal, P. Eng.
Dept of Computing Science
Thompson Rivers University
KAMLOOPS, BC V2C 0C8, CANADA

Title: Digital Speech Processing of Two Languages: English and Punjabi

Abstract: Digital Speech Processing has many practical applications, including speech analysis/synthesis, speaker verification/identification and language identification. It is a research area that involves Computing Science, Electrical Engineering, Mathematics, Statistics, Linguistics, and Phonetics. Human speech is very complicated and no computer model can account for all the characteristics of speech production. However, the linear prediction analysis/synthesis model has gained popularity in digital speech processing because the mathematical theory is well-known, and the quality of speech synthesized by this model is almost indistinguishable from the original speech. With more than 140 million speakers in 150 countries, the Punjabi language is amongst the top 15 spoken languages. Although English and Punjabi have totally different phonetics, they have been investigated using linear prediction analysis. This talk will outline the problems encountered during the linear prediction analysis/synthesis of these two languages. It will be of interest to research scholars in many fields: Computing Science & Engineering, Information Technology, Linguistics, Literature, Mathematics, Computerized Speech Analysis & Synthesis, Natural Language Processing, and applications of Linear Algebra.
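
To make the model concrete, here is a minimal sketch (my illustration) of linear prediction analysis by the autocorrelation method with the Levinson-Durbin recursion; feeding the residual through the corresponding all-pole synthesis filter would complete the analysis/synthesis loop described above:

```python
# Minimal sketch: LPC coefficients via autocorrelation + Levinson-Durbin.
import numpy as np

def lpc(frame, order):
    """Return coefficients c with s[n] ~= sum_k c[k] * s[n-1-k]."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):             # Levinson-Durbin recursion
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] += k * a[i - 1::-1]        # a[j] += k * a[i-j]
        err *= (1.0 - k * k)
    return -a[1:], err                        # predictor, residual energy

n = np.arange(400)                            # synthetic vowel-like frame
frame = (np.sin(0.3 * n) + 0.5 * np.sin(0.9 * n)
         + 0.01 * np.random.default_rng(0).normal(size=n.size))
coeffs, err = lpc(frame, order=10)
print("LPC coefficients:", np.round(coeffs, 3), "residual energy:", round(err, 3))
```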

Prof. Dr. Sven-Hendrik Voss
Beuth Hochschule für Technik Berlin
University of Applied Sciences
Luxemburger Straße 10
13353 Berlin

Title: Towards Unique Performance using FPGAs in Modern Communication, Data Processing and Sensor Systems

Abstract: Modern innovative applications like machine-to-machine (M2M) communication, multi-gigabit data networks, extensive sensor networks, data acquisition and big data analytics require an enormous amount of processing power and bandwidth. The traditional approach of deploying a processing and transmission infrastructure by cascading multicore CPUs, using offload engines and GPU cores, is usually expensive and not always practical, thus creating an obstacle to creative and innovative applications. This talk gives an overview of innovative approaches in digital hardware design, far away from CPU load dependencies and multithread workarounds, with decisive hints towards fully integrated hardware solutions, thus opening doors for higher bandwidth, processing capabilities, reliability and resolution, as well as the least possible latency. The use of Field Programmable Gate Arrays (FPGAs) in combination with a sophisticated design methodology has proven to overcome many of the usual obstacles related to complex applications and to enable highly efficient implementations. Intelligent circuit design helps in decreasing implementation size and power consumption. The described approaches will be illustrated by specific design examples of challenging applications. In addition, an overview of future research within this field is presented.

Edmond Cretu
Dept. of Electrical and Computer Engineering

The University of British Columbia

Adaptive Microsystems Lab
2332 Main Mall, room 3063
Vancouver, V6T 1Z4, BC, Canada

Title:

Abstract:

Yukio Ohsawa, PhD, Professor
Department of Systems Innovation
School of Engineering, The University of Tokyo
113-8656 Tokyo, Japan

Title: Discovery without Learning - A Lesson from Innovators Marketplace on Data Jackets

Abstract: In the workshop called Innovators Marketplace on Data Jackets (IMDJ), which I presented at SPIN 2015, participants exchange abstracts of their data, requirements for data, and knowledge about data science, so that they discover ways to use/reuse/collect data. A lesson we learned from IMDJ recently is that users of data need methods for Discovery without Learning, because they seek clues for decision making from data without significant patterns or coherent causalities. In this talk I show simple algorithms, including Tangled String, applied to time series of earthquakes and of human behaviors in markets. The results show that the Discovery-without-Learning approach externalizes useful clues for decision making.

Professor C. Sidney Burrus
Dept. of Electrical and Computer Engineering
Rice University, Houston, Texas, USA

Title: FIR Filter Design using Lp Approximation

Abstract: This paper applies the iterative reweighted least squares (IRLS) algorithm to the design of optimal Lp approximation filters. The algorithm combines a variable-p technique with Newton's method to give excellent robust initial convergence and quadratic final convergence. Details of the convergence properties when applied to the Lp optimization problem are given. The primary purpose of Lp approximation for filter design is to allow design with different error criteria in the pass and stop bands and to design constrained Lp approximation filters. The new method can also be applied to the complex Chebyshev approximation problem and to the design of two-dimensional FIR filters. Initial work on the application to IIR filters has been done.
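
A bare-bones version of the core iteration is sketched below (my simplification: fixed p and no Newton acceleration, the two ingredients the paper adds for robust and quadratic convergence), for a linear-phase lowpass design on a frequency grid:

```python
# Minimal sketch: IRLS for Lp FIR design; the weights |e|^(p-2) turn the Lp
# problem into a sequence of weighted least-squares problems.
import numpy as np

M, p = 15, 4                                    # half-order, Lp exponent
w = np.linspace(0, np.pi, 512)
band = (w <= 0.4 * np.pi) | (w >= 0.6 * np.pi)  # pass/stop bands only
w, D = w[band], (w[band] <= 0.4 * np.pi).astype(float)
A = np.cos(np.outer(w, np.arange(M + 1)))       # amplitude of even-symmetric FIR

x = np.linalg.lstsq(A, D, rcond=None)[0]        # L2 solution as starting point
for _ in range(50):
    e = np.abs(A @ x - D) + 1e-12
    Wt = e ** ((p - 2) / 2)                     # sqrt of IRLS weights
    x = np.linalg.lstsq(Wt[:, None] * A, Wt * D, rcond=None)[0]
print("Lp error after IRLS:", (np.abs(A @ x - D) ** p).sum() ** (1 / p))
```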


Ivan Linscott
PI, Radio Science Experiment, Electrical Engineering Department
Stanford University, USA

Title: First Results from The New Horizons Encounter at Pluto

Abstract: The instruments on board the New Horizons spacecraft measured key characteristics of Pluto and Charon during the July 14, 2015, flyby. The data collected is being transmitted to Earth over the next 16 months. To date, high-resolution images have been obtained along with spatially resolved spectroscopy in the infrared and ultraviolet, revealing a world of extraordinary character. Additionally, during the flyby the Radio Science Experiment (REX), in the NH X-band radio transceiver, recorded powerful uplink transmissions from Earth stations, as well as broadband radiometric power from the surface of Pluto and Charon. The REX recording of the uplinks produced a precise measurement of the surface pressure, the temperature structure of the lower atmosphere, and the surface radius of Pluto. In addition, REX measured thermal emission from Pluto to a precision of 0.1 K at 4.2 cm wavelength during two linear scans across the disk at close range when both the dayside and the night side were visible. This work was supported by NASA’s New Horizons project.

Zara M.K. Moussavi
Director, Biomedical Engineering Program
Professor & Canada Research Chair
Department of Electrical & Computer Engineering
Winnipeg, Manitoba, Canada

Title: Transcranial Magnetic Stimulation of the Brain as a Treatment for Neurological Disorders.

Abstract: Dementia, and specifically Alzheimer's disease (AD), is a growing problem in our society as life expectancy increases. Current treatments for Alzheimer's disease are unable to cure or halt the progress of the disease, and have only mixed results in alleviating the symptoms. In recent years, non-invasive brain stimulation using repetitive Transcranial Magnetic Stimulation (rTMS) has been used as a potential treatment for Alzheimer’s disease. TMS uses a magnetic coil to induce an electric field in brain tissue. When used repetitively, it is called rTMS. It is usually applied to the dorsolateral prefrontal cortex (DLPFC) bilaterally at high frequency as a treatment for patients at various stages of Alzheimer's disease. The goal of current Alzheimer’s treatments by medication is to increase the excitability and activity of remaining cells in order to counteract the decline in brain function. Other proposed treatments for AD, such as mental exercises, also aim to increase the level of activity in the brain. Since rTMS has been shown to be able both to stimulate activity and to increase the excitability of neural tissue, we hypothesize that it will have a beneficial effect on patients for the same reasons that acetylcholinesterase inhibitors and mental exercises are useful. In this talk, current protocols and studies of rTMS application for the treatment of Alzheimer’s disease, including our own study, will be introduced, and the results will be discussed.

Dr. Pan Agathoklis
Dept of ECE, University of Victoria
P.O. Box 1700, Victoria, B.C., V8W 2Y2, CANADA

Title: Image and Video Editing in the Gradient Domain using a Wavelet-based Approach.

Abstract: There are many applications where a function has to be obtained by numerically integrating gradient data measurements. In signal and image processing, such applications include digital photography where the camera senses changes in intensity rather than intensity itself, as is the case in most cameras today; rendering high dynamic range images on conventional displays; and editing and creating special effects in images and video. A common approach to this multi-dimensional (mD) numerical integration problem is to formulate it as the solution of an mD Poisson equation and obtain the optimal least-squares solution using any of the available Poisson solvers. Another area of application is adaptive optics telescopes, where wavefront sensors provide the gradient of the wavefront and it is required to estimate the wavefront by essentially integrating the gradient data in real time. Several fast methods have been developed to accomplish this, such as Multigrid, Conjugate Gradient and Fourier transform techniques similar to those used in machine vision. A new 2-D and 3-D reconstruction method based on wavelets has been developed and applied to image reconstruction for adaptive optics and to image and video editing. This method is based on obtaining a Haar wavelet decomposition of the image directly from the gradient data and then using the well-known Haar synthesis algorithm to reconstruct the image. This technique further allows the use of an iterative Poisson solver at each iteration to enhance the visual quality of the resulting image and/or video. This talk focuses on image reconstruction techniques from gradient data and discusses the various applications where these techniques can be applied. They range from advanced optical telescopes to image and video editing, shape from shading, etc.
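
For orientation, here is a minimal sketch (my illustration) of the Fourier-transform flavour of Poisson solver mentioned above: it recovers an image, up to an additive constant, from forward-difference gradient data under periodic boundary conditions:

```python
# Minimal sketch: least-squares integration of a gradient field by solving
# the Poisson equation lap(u) = div(g) with FFTs (periodic boundaries).
import numpy as np

def integrate_gradient(gx, gy):
    H, W = gx.shape
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    kx = 2 * np.pi * np.fft.fftfreq(W)
    ky = 2 * np.pi * np.fft.fftfreq(H)
    denom = (2 * np.cos(kx)[None, :] - 2) + (2 * np.cos(ky)[:, None] - 2)
    denom[0, 0] = 1.0                 # zero frequency: the mean is arbitrary
    U = np.fft.fft2(div) / denom
    U[0, 0] = 0.0
    return np.real(np.fft.ifft2(U))

# Round trip: build a smooth image, take forward differences, reintegrate.
y, x = np.mgrid[0:64, 0:64] / 64.0
img = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
gx = np.roll(img, -1, axis=1) - img
gy = np.roll(img, -1, axis=0) - img
rec = integrate_gradient(gx, gy)
print("max error:", np.abs((rec - rec.mean()) - (img - img.mean())).max())
```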

Dr Wenwu Wang
Centre for Vision, Speech and Signal Processing
Department of Electronic Engineering
University of Surrey, Guildford GU2 7XH, United Kingdom

Title:

Abstract:

Prof. Takeshi Onodera
Research and Development Center for Taste and Odor Sensing, Kyushu University
Fukuoka-shi, 819-0395, Japan

Title: Highly Sensitive Detection of Explosives Using Surface Plasmon Resonance Biosensor

Abstract: The presentation will focus on recent developments in an “electronic dog nose” based on a portable surface plasmon resonance (SPR) sensor system and monoclonal antibodies to explosives for trace detection of explosives. We developed suitable sensor surfaces for the SPR sensor for on-site detection. For the SPR sensor to detect trace amounts of explosives, the molecules of the explosives have to be dissolved in a buffer solution. Therefore, we developed not only the appropriate sensor surfaces but also our own antibodies, a collection procedure for trace explosives, and a protocol for on-site detection of explosives with the SPR sensor system. Sensor surfaces modified with self-assembled monolayers (SAMs) and portable SPR sensor systems were developed for on-site sensing. A limit of detection (LOD) of 5.7 pg/mL (ppt) for the explosive 2,4,6-trinitrotoluene (TNT) was achieved using a combination of an indirect competitive assay (inhibition assay) format and a polymer brush-modified sensor surface. To realize fast TNT detection, we also adopted a displacement method on the SPR system. In the displacement method, the antibody solution and the TNT solution do not require premixing before measurement and can be injected sequentially. Detection can be judged from the slope of the sensorgram within 10 s after the injection of the TNT solution. The LOD of TNT in the displacement assay format with a one-minute flow of TNT solution was 0.9 ng/mL (ppb) when a SAM surface containing an ethylene glycol chain with DNP-glycine was used. Furthermore, a demonstration experiment of TNT detection within one minute was carried out successfully using the portable SPR sensor system with the displacement assay format and sample collection by wiping.

Jean-Pierre Leburton
Gregory Stillman Professor of Electrical and Computer Engineering,
Beckman Institute for Advanced Science & Technology.
University of Illinois at Urbana-Champaign, USA

Title: Genomics with Semiconductor Nanotechnology

Abstract: In recent years there has been tremendous interest in using solid-state membranes with nanopores as a new tool for DNA and RNA characterization and possibly sequencing. Among solid-state porous membranes, the single-atom thickness of monolayer graphene makes it an ideal candidate for DNA sequencing, as it can scan molecules passing through a nanopore at high resolution. Additionally, unlike most insulating membranes, graphene is electrically active, and this property can be exploited to control and electronically sense biomolecules. In this talk, I will present a scenario that integrates biology with a graphene-based field-effect transistor for probing the electrical activity of DNA molecules during their translocation through a graphene membrane nanopore, thereby providing a means to manipulate them and potentially identify their molecular sequences by electronic techniques. Specifically, I will show that the shape of the edge as well as the shape and position of the nanopore can strongly affect the electronic conductance through a lateral constriction in a graphene nanoribbon, as well as its sensitivity to external charges. In this context the geometry of the graphene membrane can be tuned to detect the rotational and positional conformation of a charge distribution inside the nanopore. Finally, I show that a quantum point contact (QPC) geometry is suitable for the electrically active graphene layer and propose a viable design for a graphene-based biomolecule detecting device.


Patrick Gaydecki
Professor, Sensing, Imaging and Signal Processing Group
School of Electrical and Electronic Engineering
University of Manchester, Manchester M60 1QD, United Kingdom

Title: Real-time Digital Emulation of the Acoustic Cello using dCello

Abstract: We describe a device called dCello, which modifies the sound produced by an electric cello, producing an output signal which, when fed to an amplifier and loudspeaker, approximates closely the timbre of a high quality acoustic equivalent. Although the engineering details of the system are complex, the principles are straightforward. The signal produced by the pickup from the electric cello is first fed to a high-impedance preamplifier, converted into digital form and then processed by a digital signal processor operating at 550 million multiplication-additions per second (MMACs). The algorithm on the DSP device functions as the body of a wooden cello, which the electric cello lacks. It also operates so quickly that there is no perceptible delay between the bow striking the string and the corresponding sound generated by the amplifier. The unit incorporates a number of other functions to optimize the characteristics of the output to suit the acoustic properties of the ambient space or player preferences. These include a 20-band graphic equalizer, a versatile arbitrary equalizer, a volume control and an adjustable blender. The blender, which combines the original with the processed signal, extends the scope of the system for use with acoustic instruments fitted with pickups on the bridge. The unit is controlled by Windows-based software that allows the user to download new responses and to adjust the settings of the volume (gain), graphic equalizer and arbitrary equalizer. The device has already been used by a professional cellist during her performance at a music festival in the Netherlands, to considerable acclaim.
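
In essence, the body emulation is a long FIR filter applied to the pickup signal, with the blender cross-fading the dry and processed audio. A toy sketch follows (my illustration with a synthetic impulse response; the real device uses a measured body response and runs in real time on the DSP):

```python
# Minimal sketch: FIR "body" filter plus a dry/wet blender.
import numpy as np

def cello_body_emulation(pickup, body_ir, blend=0.8):
    """blend=1.0 -> fully processed; blend=0.0 -> raw pickup signal."""
    wet = np.convolve(pickup, body_ir)[:len(pickup)]   # FIR body filter
    return blend * wet + (1.0 - blend) * pickup

fs = 48_000
t = np.arange(fs) / fs
pickup = np.sign(np.sin(2 * np.pi * 220 * t))          # crude string-like tone
# Toy body response: two decaying resonances (a real one would be measured).
ir = (np.exp(-40 * t[:2048]) * np.sin(2 * np.pi * 180 * t[:2048])
      + 0.5 * np.exp(-60 * t[:2048]) * np.sin(2 * np.pi * 420 * t[:2048]))
out = cello_body_emulation(pickup, ir, blend=0.8)
print("output samples:", out.shape, "peak:", round(float(np.abs(out).max()), 2))
```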


Invited Talks of SPIN 2015


Carlos M. Travieso-González
Vice-Dean, University of Las Palmas de Gran Canaria
Institute for Technological Development and Innovation in Communications (IDeTIC)
Signals and Communications Department, Campus Universitario de Tafira, s/n
Pabellón B - Despacho 111, 35017 - Las Palmas de Gran Canaria, SPAIN.

Title: Automatic Arousal Detection based on Facial Information: A Biomedical Tool.

Abstract: Research on neurodegenerative diseases has increased in recent years, and new techniques and methods have been proposed. This work is based on the relationship between humanity and emotion, which cannot be separated and is innate to humans; the study of emotions has therefore been of great interest. Researchers are trying to analyse why and how emotions occur, relating them to external events and to physical and internal bodily reactions, in order to answer these questions and to distinguish these emotions. A way of detecting them is shown in this keynote. In particular, automatic detection of the level of excitement or arousal is proposed based on the labial movement of a person. The system is innovative and completely non-invasive, and it can help in the early diagnosis and prolonged follow-up of patients with various psychological or neurodegenerative disorders.


Professor Juan Luis Castro
Department of Computer Science and Artificial Intelligence
University of Granada, Spain

Title: From Tags Cloud to Concepts Cloud

Abstract: The spread of Web 2.0 has caused an explosion of user-generated content. Users can tag resources in order to describe and organize them. A tag cloud provides a rough impression of the relative importance of each tag within the overall cloud, in order to facilitate browsing among numerous tags and resources. The main failing of these systems is that alternative tags can be used for the same concept, which can distort the tag cloud. In this lecture we analyse tag recommender systems and tag cloud representation, focusing on systems able to create conceptually extended folksonomies. In these folksonomies each concept is represented as a set of multi-terms (alternative tags for the same concept), and the tag cloud is represented by using a canonical concept label for every concept. We will present TRCloud, a tag recommender system able to create a conceptually extended folksonomy from scratch. It uses a hybrid approach to detect an initial set of candidate tags from the content of each resource, by means of syntactic, semantic, and frequency features of the terms. Additionally, the system adapts the weights of the remaining candidates when a user selects a tag, as a function of the syntactic and semantic relations among tags.


David M. Nicol
Director, Information Trust Institute
Franklin W. Woeltge Prof. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
Urbana, Illinois, United States

Title: Modeling Trust in Integrated Networks

Abstract: Trust in a complex system is an expectation that the system will behave as intended, even in contexts and scenarios that were unforeseen. Development of trust models and means of evaluating them is a key problem in the design of integrated networks, which embody hierarchy, composition, and separation of function. Different layers have different trust attributions (e.g., one may focus on provisioning of connectivity, another on provisioning of bandwidth). The challenge for us is to develop means of reasoning about the overall end-to-end trust in the system, perhaps by composing trust models that have been developed for different layers. This talk identifies the issues and suggests an approach in the context of network access control.


Prof. Károly Farkas
Department of Networked Systems and Services,
Budapest University of Technology & Economics,
Hungary, European Union

Title: Smart City Services Exploiting the Power of the Crowd

Abstract: Collecting data and monitoring our environment provide the basis for the smart city applications that are becoming popular today. However, the traditional approach of deploying a sensing and monitoring infrastructure is usually expensive and not always practical, forming an obstacle to creative and innovative application development.

Mobile crowdsensing can open new ways for data collection and smart city services. In this case, mobile devices with their built-in sensors, and their owners, are used to monitor the environment and collect the necessary data, usually in real time and with minimal cost. Thus, the power of the crowd can be exploited as an alternative to infrastructure-based solutions for developing innovative smart city services.

In this talk, we give a short overview of the European COST ENERGIC Action (IC1203), focusing on the potential of mobile crowdsensing in smart city services; the use of crowdsourced geographic data in government; and the requirements for a generic crowdsensing framework for smart city applications. Moreover, we present some case studies and sample scenarios in this field, such as the smart timetable service of a travel planner, which can be updated in real time based on the time gaps between consecutive buses continuously monitored by passengers on public transportation routes.


Professor Björn Þór Jónsson
School of Computer Science,
Reykjavík University, Iceland

Title: Are We There Yet? Towards Scalability of High-Dimensional Indexing

Abstract: Due to the proliferation of tools and techniques for creating, copying and sharing digital multimedia content, large-scale retrieval by content is becoming more and more important, for example for copyright protection. Recently proposed multimedia description techniques typically describe the media through many local descriptors, which both increases the size of the descriptor collection and requires many nearest neighbour queries. Needless to say, scalability of query processing is a significant concern in this new environment. To tackle scalability, two basic categories of approaches have been studied. The typical “computer-vision-based” approach is to compress the descriptors to fit them into memory, while the typical “database-based” approach tackles scale by dealing gracefully with disk accesses. In order to cope with the Web-scale applications of the future, we argue a) that disk accesses cannot be ignored, and b) that scale can no longer be ignored in research on multimedia applications and high-dimensional indexing. This talk will give an overview of some major scalability results in the literature, with a strong focus on the database-based methods.


Prof. Dong Hwa Kim
Dept. of Electronic and Control Eng.,
Hanbat National University, South Korea.

Title: Smart City and ICT in Korea

Abstract: This lecture presents e-governance and a new paradigm for a knowledge-based society using ICT, taking Seoul as an example of Korean e-governance, including smart grid and smart city initiatives. With the growth of ICT, many countries have been investing in e-governance, and the world’s major cities, for instance Seoul, New York, Tokyo, Shanghai, Singapore, Amsterdam, Cairo, Dubai, Kochi and Malaga, have embarked on smart city programs as part of e-government. Korea is highly competitive in ICT, ranking first among 159 countries in the ICT Development Index (ITU, 2011) and first among 192 countries in the E-Government Readiness Index (UN, 2010). Korea is also at the global top level in ICT infrastructure and service penetration. Building on this infrastructure, Korea is preparing a new knowledge-based paradigm. This ICT underpins Seoul’s implementation of its “Smart Seoul 2015” project, providing a best-practice guide to the construction and operation of a smart city, smart grid, and energy systems. Seoul in particular has the best conditions for a smart city (e-governance): ICT infrastructure developed in anticipation of future service demands; a well-defined “integrated city-management framework”; and increasing access to smart devices and education on their use, across income levels and age groups. The lecture will also present Korea’s R&D programs and opportunities for cooperation; the conclusion suggests possible approaches, why it is important at this point to cooperate, and how good ideas for cooperation can be developed.


Professor Kiyoshi Toko
Distinguished Professor, Director,
Graduate School of Information Science and Electrical Engineering.
Kyushu University, Fukuoka, Japan

Title: Biochemical Sensors for Gustatory and Olfactory Senses

Abstract: Physical sensors have been developed and used around the world for a long time, but chemical sensors that play the role of the gustatory and olfactory senses had, until recently, not been developed. These sensors have now made rapid progress and are named electronic tongues and noses, respectively. A taste sensor, which is a kind of electronic tongue, utilizes lipid/polymer membranes as the receptor part for taste substances. This sensor has the property of global selectivity, which implies the potential to decompose taste into the five basic taste qualities (sweetness, bitterness, sourness, saltiness, umami) and quantify them. The taste sensor system is composed of at least five different sensor electrodes, each of which responds to several kinds of chemical substances with the same taste in a similar way, but shows no response to substances with other taste qualities. The taste sensor is now sold worldwide and utilized in food and pharmaceutical companies. On the other hand, there are many types of electronic noses according to their materials and measurement principles, such as oxide semiconductors, quartz crystal microbalance (QCM), surface plasmon resonance (SPR), and conductive polymers. An electronic nose with SPR and antigen-antibody interaction can detect explosives such as trinitrotoluene (TNT) at the ppt level, which is superior to a dog's nose. This electronic dog nose is just coming into real use.


Masahito Togami, Ph.D.
Senior Researcher, Unit Leader, Intelligent Media Systems
Research Department, Central Research Laboratory
Hitachi Ltd., Japan

Title: Time-Varying Multichannel Gaussian Distribution Model for Speech Dereverberation and Separation

Abstract: In this talk, I will introduce a recently proposed time-varying multichannel Gaussian distribution model for speech signal processing, which reflects the time-varying characteristics of speech sources. The time-varying multichannel Gaussian distribution model can be integrated naturally with several Gaussian-based methods, e.g., Kalman filtering and multichannel Wiener filtering. In the time-varying multichannel Gaussian distribution model, it is easy to add and remove Gaussian distribution models for specific purposes. Additionally, optimization of the parameters is performed efficiently using the EM algorithm. In addition to introducing the time-varying multichannel Gaussian distribution model, I present its applications to noise reduction, dereverberation, and echo reduction.
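
As background for one of the Gaussian building blocks named above, here is a per-frequency multichannel Wiener filter in its simplest form (my illustration with oracle, time-invariant statistics; the talk's contribution is precisely to make such statistics time-varying): W = R_s(R_s + R_n)^{-1} applied to a simulated mixture at a single frequency bin.

```python
# Minimal sketch: multichannel Wiener filtering of a single frequency bin.
import numpy as np

rng = np.random.default_rng(0)
M, T = 4, 2000                                     # microphones, time frames
a = rng.normal(size=M) + 1j * rng.normal(size=M)   # source steering vector
s = rng.normal(size=T) + 1j * rng.normal(size=T)   # speech coefficients
n = 0.5 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
x = np.outer(a, s) + n                             # observed mixture

Rs = np.outer(a, a.conj()) * np.mean(np.abs(s) ** 2)  # speech covariance
Rn = np.cov(n)                                     # noise covariance (oracle)
W = Rs @ np.linalg.inv(Rs + Rn)                    # multichannel Wiener filter
s_hat = W @ x
print("residual noise power before/after:",
      round(float(np.mean(np.abs(x - np.outer(a, s)) ** 2)), 3),
      round(float(np.mean(np.abs(s_hat - np.outer(a, s)) ** 2)), 3))
```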


Dr. János MIZSEI,
Professor, Department of Electron Devices.
Budapest University of Technology,
Hungary, European Union

Title: Thermal-electric Logic Circuit: A Solution for Nanoelectronics.

Abstract: Until now, the continuous development of electronics has been characterized by Moore’s law. Scaling down has resulted in nanosized CMOS integrated circuits, pushing the “red brick wall” towards lower dimensions. Although current CMOS integrated circuit development is driven by a lot of innovations, there are still some limits determined by unavoidable physical effects such as tunneling of charge carriers through thin insulating regions and statistical irregularities in the number of dopant atoms.

On the other hand, there are many new ideas for building atomic- or molecular-scale devices for information technology. However, there is still a gap between the up-to-date “top-down” CMOS technology and the “bottom-up” devices, i.e. molecular electronics, nanotubes, single electron transistors. The new thermal-electric device (phonsistor) and the CMOS-compatible thermal-electric logic circuit (TELC) may help to fill this gap. The device is based on the semiconductor-metal transition (SMT) effect shown by certain materials. This effect allows an electric resistance change of three to four orders of magnitude induced by thermal or electrical excitation. The recently proposed novel active device (phonon transistor = phonsistor) is made up of only bulk-type semiconductor domains, consisting of significantly fewer regions and interfaces, and providing advanced functionality compared to a monolithic MOSFET (there are no differently doped regions or pn junctions at all). This way the single switches can be processed in steps that are technologically less demanding and fewer in number. The thermal-electric logic circuit (TELC) switch can be excited by electronic and thermal signals as well; thus two different physical parameters are available for representing the different logic states.


Mort Naraghi-Pour, Ph.D.
Michael B. Voorhies Distinguished Associate Professor
School of Electrical Engineering and Computer Science
Louisiana State University, Baton Rouge, LA, USA

Title: Hypothesis Testing in Wireless Sensor Networks in the Presence of Misbehaving Nodes.

Abstract: Wireless sensor networks (WSNs) are used in many military and civilian applications including intrusion detection and surveillance, medical monitoring, emergency response, environmental monitoring, target detection and tracking, and battlefield assessment. In mission-critical applications of WSNs, the security of the network operation is of utmost importance. However, traditional network security mechanisms are not adequate for distributed sensing networks. This is due to the fact that these networks cannot be physically secured, making the sensor nodes vulnerable to tampering. For example, an adversary may tamper with legitimate sensors or deploy its own sensors in order to transmit false data so as to confuse a central processor. False data may also be due to sensor node failures. In large WSNs with hundreds or thousands of nodes, many nodes may fail due to hardware degradation or environmental effects.

In this talk we consider an important application, namely the problem of detection using WSNs in the presence of one or more classes of misbehaving nodes. Binary hypothesis testing is considered along with decentralized and centralized detection. In the former case the sensors make a local decision and transmit that decision to a fusion center. In this case we identify each class of nodes with an operating point (false alarm and detection probabilities) on the ROC (receiver operating characteristic) curve. In the latter case the sensor nodes transmit their raw data to the fusion center. In this case the nodes are identified by the probability density function (PDF) of their observations. To classify the nodes and detect the underlying hypothesis, maximum likelihood estimation of the operating point or the PDF of the sensors' observations is formulated and solved using the Expectation Maximization (EM) algorithm with the nodes' identities as latent variables. It is shown that the proposed method significantly outperforms previous techniques such as reputation-based methods.
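
To give a flavour of the EM formulation (a heavily simplified sketch of my own: two classes, binary decisions, and a per-slot hypothesis assumed known, whereas the talk detects it jointly), the code below treats each node's class as a latent variable and recovers per-class operating points from the reported decisions:

```python
# Minimal sketch: EM over latent node classes with per-class operating
# points (pfa, pd), estimated from binary decisions at the fusion center.
import numpy as np

rng = np.random.default_rng(1)
N, T = 30, 400
h = rng.integers(0, 2, T)                        # active hypothesis per slot
cls = (rng.random(N) < 0.3).astype(int)          # 0 = honest, 1 = misbehaving
op = np.array([[0.05, 0.90],                     # honest:      (pfa, pd)
               [0.60, 0.30]])                    # misbehaving: (pfa, pd)
p1 = op[cls, 1][:, None] * (h == 1) + op[cls, 0][:, None] * (h == 0)
u = (rng.random((N, T)) < p1).astype(int)        # decisions sent to fusion center

pfa, pd, pi = np.array([0.2, 0.5]), np.array([0.7, 0.4]), np.array([0.5, 0.5])
ones1, ones0 = u[:, h == 1].sum(1), u[:, h == 0].sum(1)
n1, n0 = (h == 1).sum(), (h == 0).sum()
for _ in range(100):
    logL = np.zeros((N, 2))
    for c in range(2):                           # E-step: class responsibilities
        logL[:, c] = (ones1 * np.log(pd[c]) + (n1 - ones1) * np.log(1 - pd[c])
                      + ones0 * np.log(pfa[c]) + (n0 - ones0) * np.log(1 - pfa[c])
                      + np.log(pi[c]))
    r = np.exp(logL - logL.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)
    for c in range(2):                           # M-step: weighted ML updates
        pd[c] = (r[:, c] * ones1).sum() / (r[:, c].sum() * n1)
        pfa[c] = (r[:, c] * ones0).sum() / (r[:, c].sum() * n0)
        pi[c] = r[:, c].mean()
print("estimated (pfa, pd) per class:", np.round(np.c_[pfa, pd], 2))
print("nodes misclassified (up to label swap):", int((r.argmax(1) != cls).sum()))
```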


Patrick Gaydecki
Professor, Sensing, Imaging and Signal Processing Group
School of Electrical and Electronic Engineering
University of Manchester, Manchester M60 1QD, United Kingdom

Title: A commentary on Theories of Time and their Implications for Digital Signal Processing

Abstract: The concept of absolute time was introduced by Newton in his work “Philosophiæ Naturalis Principia Mathematica”, in which it was stated that time existed independent of any reference and any observer, flowing uniformly without regard to external influences or factors. This provided a theoretical foundation for the establishment of Newtonian mechanics and continues to be applied, successfully, in our quantitative treatment of physical processes. Since then there have been several revolutions in our understanding of time, all of which to a lesser or greater degree reveal that time cannot, on the macroscopic (Einsteinian) or microscopic (quantum) scale, be considered as absolute and uniform, but instead is inextricably linked to a particular frame of reference and the fine-grain structure of the universe. This paper seeks to explore key concepts in our understanding (and misunderstanding) of time, and how the measurement of time is central to digital signal processing, itself predicated on regular, periodically sampled information.


Professor Philip Hall,
Distinguished Lecturer, IEEE Society on Social Implications of Technology (SSIT),
Department of Electrical & Electronic Engineering,
The University of Melbourne, Australia

Title: Climate Divergence – Fact or Fiction? Synoptic Characterisation as a Methodology for Short-to-Medium Climate Analysis and Forecasting

Abstract: It is widely accepted that global climate change is having an increasingly dramatic impact on water, energy and food security. Establishing a connection between regional climate variability and rainfall delivery variability associated with extreme events will enable us to gain an improved understanding of the potential impacts of climate change on essential human activities – such as broadacre farming – via the rainfall delivery mechanism. Therefore, being able to understand these events and their transitional behaviours is of prime importance.

Characterisation methodologies have, to date, not been widely used to study meteorological phenomena. Where they have been successfully applied for this purpose, climate data more than synoptic data has been used and the primary focus has been on analysing the medium-to-long term trends rather than trends in short-to-medium term climate variability. However, historical synoptic data shows that recent climate variability displays greater divergence from the long term trend, suggesting that short-to-medium term climate variability can be analysed using the synoptic characteristics of the delivery mechanism rather than the occurrence of extreme events. This characteristic of climate change, being the trend of short-to-medium term variability of atmospheric parameters from the long term trends, is defined by the author as climate divergence. Importantly, therefore, if we are to understand the variation in rainfall delivery and water availability associated with climate change and its potential impact on natural resources and reliant human activities – such as soil and agriculture – then we must consider the climate divergence from long term trends (both historical and future forecasts), rather than the long term trends themselves.

Synoptic characterisation (in the meteorological context) is defined as a technique that uses synoptic data to identify and study the distinctive traits and essential dynamic features, such as behavioral characteristics and trends, of atmospheric variables associated with meteorological phenomena. This paper seeks to demonstrate that synoptic characterisation (meteorological) can be used to assist us in establishing a connection between climate divergence and deviations in rainfall patterns, and thus can be adapted as an effective short-to-medium term climate analytical and forecasting tool.

The development of such a tool, together with better monitoring technologies and data collection options, will provide a framework for better decision making and risk management. Information gained through the synoptic characterisation of regional climate, in conjunction with other data gathering activities, can enhance the basis for studies that provide a large portion of the data required for evaluating and validating numerical regional and global scale climate models. Information from these studies indirectly assists in the evaluation of the impacts due to potential future climate changes on the regional hydrologic system.


Radim Burget,
Group Leader, Data Mining Group, Signal Processing Laboratory,
Department of Telecommunications, Brno University of Technology,
Brno, Czech Republic, European Union.

Title: Process Optimization and Artificial Intelligence: Trends and Challenges.

Abstract: Business process optimization has become increasingly attractive in the wider area of business process intelligence. Although much research has been done in this area, its transfer from research laboratories into a business environment very often fails. There are plenty of obstacles that prevent its deployment in industry. This presentation will provide an overview of the technologies used in one system that has made a successful path from the research lab into business deployment. Furthermore, it will discuss complementary technologies related to artificial intelligence that help in controlling complex processes.


Chris Rizos
President, International Association of Geodesy (IAG)
Professor, Geodesy & Navigation
Surveying & Geospatial Engineering
School of Civil & Environmental Engineering
The University of New South Wales,
Sydney, AUSTRALIA

Title: Precise GNSS Positioning – the Role of National and Global Infrastructure and Services.

Abstract: Precise positioning, defined broadly as positioning with accuracy better than about one metre, is something that GPS was never intended to deliver. However, starting in the 1980s, a series of innovations ensured that centimetre-level accuracy could be achieved. The primary innovation was the development of the differential or relative GPS positioning mode, whereby positioning of a receiver, in real time and even if moving, was done using GPS data from a static reference station. DGPS was refined over the 1980s and 1990s to become an extremely versatile precise positioning and navigation tool. It has revolutionised geodesy, surveying, mapping and precise navigation. Furthermore, since the 1990s many governments, academic institutions and private companies have established "continuously operating reference stations" (or CORS) as fundamental national positioning infrastructure. In 1994 the International GPS Service (IGS) was launched, characterised by a globally distributed GPS CORS network (now numbering over 400 stations) whose data are used to compute precise satellite orbit and clock information. Such a service continues to provide vital information to support geoscience, national geodetic programs, and precise positioning in general.
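
The differential principle mentioned above is easy to state concretely. A minimal sketch (illustrative only; real DGPS services handle clocks, atmosphere and message formats far more carefully): the reference station, whose position is known, turns its own pseudorange errors into per-satellite corrections that a nearby rover can apply.

    import numpy as np

    def range_correction(sat_pos, ref_pos, measured_pseudorange):
        """Per-satellite DGPS correction computed at a known reference station."""
        geometric = np.linalg.norm(sat_pos - ref_pos)  # true range, metres
        return geometric - measured_pseudorange        # correction to broadcast

    def corrected_pseudorange(rover_measurement, correction):
        """Rover side: apply the broadcast correction to its own measurement."""
        return rover_measurement + correction

    sat = np.array([15600e3, 7540e3, 20140e3])   # hypothetical satellite ECEF position
    ref = np.array([-2694e3, -4293e3, 3857e3])   # surveyed reference-station position
    corr = range_correction(sat, ref, measured_pseudorange=2.055e7)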

We are witnessing the launch of a surge of new navigation satellite systems, with a commensurate increase in satellites and signals, new receiver techniques and an expansion in precise positioning applications. This heralds the transition from a GPS-dominated era, which has served the community for almost 30 years, to a multi-constellation Global Navigation Satellite System (GNSS) world. These new GNSSs include the modernized U.S.-controlled GPS and the Russian Federation's GLONASS constellations, China's new BeiDou system, the E.U.'s Galileo, as well as India's Regional Navigation Satellite System (IRNSS) and Japan's Quasi-Zenith Satellite System (QZSS). Next-generation CORS infrastructure is being deployed, and new precise positioning products are being generated. In addition, new positioning techniques not based on DGPS principles are being developed. One that shows considerable promise is the Precise Point Positioning (PPP) technique. Furthermore, precise positioning is becoming mainstream, and it is predicted that a massive new class of users will embrace precise GNSS positioning technology. This paper will explore developments in precise GNSS positioning technology, techniques, infrastructure, services and applications.


Prof. Dr. C. P. Schnorr
Johann Wolfgang Goethe Universität
Fachbereich Mathematik
AG Mathematische Informatik 7.2
Frankfurt, Germany.

Title: Towards Factoring Integers by CVP Algorithms for the Prime Number Lattice in Polynomial Time.

Abstract: We report on progress in factoring large integers N by CVP algorithms for the prime number lattice L. For factoring the integer N we generate vectors of the lattice L that are very close to the target vector N that represents N. Such close vectors yield relations mod N given by p_n-smooth integers u, v, |u - vN| that factor over the first n primes p_1, ..., p_n. We can factor N given about n such independent relations u, v, |u - vN|. Recent improvements:

- We perform the stages of enumerating lattice vectors close to N according to their success rate in providing a relation u, v, |u - vN|. The success rate is based on the Gaussian volume heuristic, which estimates the number of lattice points in an n-dimensional sphere of a given radius with a random center.

- In each round we randomly fine each prime p_i, for i = 1, ..., n, with probability 1/2 by doubling the p_i coordinate of the vectors in L. The random fines generate independent relations mod N.

- We apply extreme pruning to the enumeration of lattice vectors close to N, generating a very small fraction of the close vectors efficiently while still providing n relations mod N.

- The original method creates p_n-smooth u, v, |u - vN|. We must extend the method to non-smooth v because for large N there are not enough relations with smooth v. The smoothness of v does not help to factor N; it merely results from the CVP algorithm for L.

Right now we create one relation mod N for N ≈ 10^14 and n = 90 primes in 6 seconds per relation. For much larger N there are not enough relations with p_n-smooth v, but there exist enough relations for arbitrary v. A main problem is to extend the method for directing and pruning the search for successful v from smooth to arbitrary v. For N ≈ 2^800 and n = 900 primes there are about 2.5 × 10^11 relations mod N corresponding to lattice vectors close to some target vector N_v that represents vN, enough for the efficient generation of 900 relations mod N and to achieve a new record factorization. So far we have implemented the algorithm only for p_n-smooth v. Now we extend it to arbitrary v. Importantly, the prime basis for the CVP method is much smaller than for any other known factoring method.
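
To make the notion of a relation concrete: a pair (u, v) is useful precisely when u, v and |u - vN| all factor over the first n primes. A minimal smoothness check (illustrative only; the hard work of finding candidate pairs is done by the CVP enumeration described above):

    def first_primes(n):
        """Return the first n primes by trial division (fine for small n)."""
        primes = []
        k = 2
        while len(primes) < n:
            if all(k % p for p in primes):
                primes.append(k)
            k += 1
        return primes

    def is_smooth(m, primes):
        """True if |m| factors completely over the given prime list."""
        m = abs(m)
        if m == 0:
            return False
        for p in primes:
            while m % p == 0:
                m //= p
        return m == 1

    def is_relation(u, v, N, primes):
        """A relation mod N in the sense above: u, v and |u - v*N| all smooth."""
        return all(is_smooth(x, primes) for x in (u, v, u - v * N))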


Professor Stephen Pistorius,
Director, Medical Physics Graduate Program
Vice-Director, Biomedical Engineering Graduate Program
CancerCare Manitoba, Canada

Title: Signal Processing and Analysis of Microwave Signals for Breast Cancer Imaging and Detection.

Abstract: Annually, approximately 1.3 million women worldwide will be diagnosed with breast cancer and about 465,000 will succumb to it, particularly in regions where access to screening is limited. Early detection and effective treatment are major factors contributing to long-term survival. X-ray mammography is the current standard for the detection of breast cancer. While x-ray mammography has led to a decrease in mortality rates, its high capital and human resource requirements, as well as significant false positive and false negative rates, leave room for improvement.

Microwaves have been used to retrieve quantitative and qualitative images of objects of interest (OI) for many years. Since the late 1970s, Microwave Imaging (MWI) has been investigated for biomedical applications, including systems for imaging animal extremities, chemotherapy monitoring, calcaneus and heel imaging, and breast cancer detection and imaging. This technology is based on the differences between the dielectric properties of healthy and malignant breast tissues in the microwave frequency range. MWI may prove to be less harmful and stressful for the patient, since it does not require breast compression and the signals are non-ionizing, with a power of less than 10 dBm. There are various options in the design of biomedical MWI systems, as well as associated options in the mathematical formulation of the corresponding scattering problem. These options impact the imaging performance of the system; e.g., different algorithms and regularization techniques have been implemented to treat the inherent nonlinearity and ill-posedness of such problems, as well as different experimental techniques used to collect data for the algorithms.

The two major MWI modalities are Microwave Tomography (MT) and Breast Microwave Radar (BMR). The basic MWI experimental system consists of a chamber in which the OI to be imaged is placed. Microwaves are introduced via antennas within the chamber. The microwave field or signal is measured using antennas, solid-state sensors or field probes distributed inside the chamber. MT techniques form a dielectric profile using electromagnetic waves at selected microwave frequencies by solving a nonlinear and ill-posed inverse scattering problem. Breast Microwave Radar (BMR) uses Ultra Wide Band (UWB) signals to form a reflectivity map of the scanned region. While BMR approaches cannot generate a dielectric map, they determine the location of strong scattering signatures, which are associated with malignant lesions, and are capable of forming high-contrast 3D images in which mm-size inclusions can be resolved.
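
A common way to form the BMR reflectivity map described above is delay-and-sum beamforming. A minimal monostatic sketch (an assumed illustration, not necessarily the authors' algorithm), in which each pixel accumulates the recorded signals at their round-trip delays:

    import numpy as np

    def delay_and_sum(signals, antenna_pos, pixels, fs, c=2e8):
        """signals: (n_antennas, n_samples) time traces, one per antenna.
        antenna_pos: (n_antennas, 2) positions; pixels: (n_pixels, 2).
        c: assumed propagation speed in breast tissue (m/s)."""
        n_ant, n_samp = signals.shape
        image = np.zeros(len(pixels))
        for i, px in enumerate(pixels):
            acc = 0.0
            for a in range(n_ant):
                d = np.linalg.norm(px - antenna_pos[a])  # one-way distance
                idx = int(round(2 * d / c * fs))         # round-trip sample index
                if idx < n_samp:
                    acc += signals[a, idx]
            image[i] = abs(acc)                          # reflectivity value
        return image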

During the last ten years, our research groups have been working on the development of novel reconstruction algorithms and sensing technologies to increase the quality of MT and BMR images. This presentation will focus on a number of novel approaches that we are investigating. These include i) BMR Holography, which processes the spectrum of the recorded responses from the breast structure and compensates for the effect of the scan geometry to create an accurate reconstruction, ii) the Modulated Scattering Technique (MST), which uses small probes to reduce the field perturbation, allowing simpler MT inversion techniques and iii) the use of small spintronic devices which can detect the amplitude and phase of the microwave signal over a wide frequency band in order to determine the time delay of a microwave signal scattered by the target.

These techniques require advanced signal processing and analysis in order to reconstruct images of objects that have small radar cross sections and small contrast to noise ratios. In this presentation, I will describe the techniques we are applying and will use phantom and patient images to illustrate the benefits and challenges that still face us.


Professor Tsuyoshi Isshiki
Dept. Communications and Computer Engineering
Tokyo Institute of Technology, Tokyo 152-8552, JAPAN

Title: Application-Specific Instruction-Set Processor (ASIP) Designs for Real-Time Video Signal Processing

Abstract: Various image processing systems, such as panel display engines and camera/video interface image engines, require extremely high data rates as well as high-quality images, which are enabled only by a very concerted effort on both the image processing algorithm design and the hardware architecture. This talk focuses on the implementation of such image processing systems based on Application-Specific Instruction-Set Processor (ASIP) technology, which provides the necessary data throughput as well as flexibility with very short design time.


Professor Ilangko Balasingham
Signal Processing Group
Department of Electronics and Telecommunications
Norwegian University of Science and Technology
N-7491 Trondheim, Norway

Title: Intra-Body Communications, Localization, and Imaging Using RF and Molecular Signals

Abstract: The healthcare sector in western and developing countries will face difficult challenges in the coming years, as the aging population, as well as the number of people suffering from chronic diseases such as diabetes and cardiovascular illnesses, increases dramatically. The cost of their treatment and care will also increase and put enormous strain on national economies. Therefore, it is urgent to develop early diagnostics, treatment and monitoring solutions for these clinical conditions.

Wireless sensor technology can play an important role in the development of these solutions. For instance, wireless biomedical sensor network systems can help with remote diagnosis and health-status monitoring of chronic patients. In this talk we will show examples, from theory to pre-clinical prototypes, of using ultra wideband technology for high data rate communication such as wireless capsule endoscopy (WCE), localization and tracking of the WCE, and microwave imaging of the heart with valves opening and closing.

However, some of the drawbacks of using electromagnetic or other conventional means of wireless signal transmission through tissues for connecting implantable devices are the large signal attenuation, leading to frequent battery replacements; heat dissipation, damaging the tissues; and cavitation, producing bubbles that can potentially cause stroke and/or heart failure. In this talk we will show that a complementary technology for designing and developing nanoscale devices is to use biological cells, molecules, and DNA structures, applying concepts stemming from biology and nature. This talk attempts to highlight the possibility of using the human nervous system for sensing, signalling and actuation in a controlled manner. Typical applications include targeted drug delivery, brain-machine interfaces, Parkinson's and Alzheimer's disease control, etc.


Prof. Yukio Ohsawa
Professor, Department of Systems Innovations.
School of Engineering,
The University of Tokyo, Japan

Title: Innovators' Marketplace on Data Jackets for Practical Data Sharing - an Application of Educational and Innovative Communications.

Abstract: In this talk I introduce the Innovators' Marketplace on Data Jackets (IMDJ), a market of data that enables data-based innovations. Here, owners of confidential datasets can hide them, showing only a digest of each dataset, and only to an allowable extent. Based on the digests thus collected, called Data Jackets (DJs), latent links among datasets are visualized to aid stakeholders' communication about latent requirements and solutions for satisfying those requirements. As a result, stakeholders of problems in business and science come to externalize and share the value of datasets and of data mining tools. Experimental results show the effects of IMDJ, which enhances stakeholders' motivation to share data and to create plans for user-centric knowledge discovery with data mining.


Prof. Bengt Lennartson
Department of Signals and Systems
Chalmers University of Technology, SE-412 96 Göteborg, Sweden

Title: Modeling and Optimization of Hybrid and Discrete Event Systems - A Unified Approach

Abstract: For discrete-event dynamic systems a number of different modeling approaches exist. The most common ones are automata, Petri nets, and Statecharts. These models are unified but also extended by a recently proposed predicate transition model (PTM). A supervisor synthesis procedure is also developed for this model class, where supervisor guards are efficiently generated, and the resulting supervisor is easily implemented in industrial control systems. The close connection between the proposed PTM and continuous-time state-space models also makes it natural to generalize PTM to hybrid systems, involving both continuous-time and discrete-event dynamics. For the resulting hybrid PTM, an optimization procedure is proposed for industrially relevant problems such as energy optimization of robot stations. It is especially discussed how this problem can be solved by integrated optimization, involving both Constraint Programming and Mixed Integer Nonlinear Programming.


Prof. Ljiljana Trajkovic,
President, IEEE Systems, Man, and Cybernetics Society
School of Engineering Science, Simon Fraser University,
University Drive, Burnaby, Canada

Title: Communication Networks: Traffic Data, Network Topologies, and Routing Anomalies

Abstract: Understanding modern data communication networks such as the Internet involves the collection and analysis of data from deployed networks. It also calls for the development of various tools for analysing such datasets. Collected traffic data are used for characterization and modeling of network traffic, analysis of Internet topologies, and prediction of network anomalies.

In this talk, I will describe the collection and analysis of real-time traffic data using special-purpose hardware and software tools. Analysis of the collected datasets indicates a complex underlying network infrastructure that carries traffic generated by a variety of Internet applications. Data collected from Internet routing tables are used to analyze Internet topologies and to illustrate the existence of historical trends in the development of the Internet. The Internet traffic data are also used to classify and detect network anomalies such as Internet worms, which affect the performance of routing protocols and may greatly degrade network performance. Various statistical and machine learning techniques are used to classify test datasets, identify the correct traffic anomaly types, and design anomaly detection mechanisms.
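
A minimal sketch of the classification step described above (with hypothetical, synthetic features standing in for real traffic measurements; any scikit-learn-style classifier would do):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical per-interval features extracted from routing/traffic data,
    # e.g. BGP update volume, flow counts, mean packet size (synthetic here).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    y = (X[:, 0] + 2 * X[:, 1] > 1.5).astype(int)  # 1 = anomalous interval

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))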


Prof. Paulo M. Mendes,
Dept. of Industrial Electronics,
University of Minho, Portugal

Title: Towards Long-Term Intracranial Pressure Monitoring Based on Implantable Wireless Microsystems and Wireless Sensor Networks

Abstract: Ambient Assisted Living (AAL) aims to provide support to healthcare professionals by making use of sensing and of information and communication technologies. Brain-related information is becoming more and more relevant for many pathologies, but access to the long-term information from the brain needed to feed such AAL technologies is still taking its first steps. One main issue when recording signals from the brain is the room available for placing the sensing device. Since the available room is limited, battery-less solutions are welcome. Also, the AAL solutions to be developed should consider not only the sensing device but also the entire supporting framework. This paper presents a solution for long-term monitoring of intracranial pressure using a wireless micro-device and a wireless sensor network. This talk will discuss the signal processing issues that need to be solved to enable a solution comprising a sufficiently miniaturized pressure sensor, powered by a wireless link, together with a reliable wireless sensor network to support the data acquisition and analysis.


Prof. Jorge Casillas, PhD,
Dept. Computer Science and Artificial Intelligence
University of Granada, SPAIN

Title: Association Stream Mining and its Use in the Analysis of Physiologic Signals.

Abstract: The rising bulk of data generated in industrial and scientific applications has fostered practitioners' interest in mining large amounts of unlabeled data in the form of continuous, high-speed and time-changing streams of information: what we call association stream mining. Contrary to the well-known approaches for finding frequent items in data streams, this appealing field of association stream mining concerns modeling dynamically changing, complex domains via production rules without assuming any a priori structure. Its goal is to extract interesting associations among the features forming such data in an unsupervised manner, adapting itself to the environment.

In this talk, previous research on related topics is reviewed, new algorithms are introduced and real-world applications are presented. Among them, special attention is paid to some original results on finding relationships among different human biosignals. Indeed, despite the knowledge that the human organism is an integrated network of physiological systems, probing interactions between these systems remains a daunting task that calls for more sophisticated methods of multivariate analysis. Here, we use association stream mining to explore relationships between a series of physiologic variables (electrodermal response, respiration, electromyogram and heart rate) during resting states and after exposure to stressful stimuli.


Prof. Vincent Vigneron,
Universite d'Evry Val d'Essonne,
UFR ST Equipe STIC et Vivant, France.

Title: Small and Big Data: in Essence a Numerical Detective Work.

Abstract: In the past, small data coincided with classical statistics. Small often referred to the sample size (usually between 20 and 50 individuals), not to the number of variables. But size is not the only critical aspect; one can also point to the readiness of the data for analysis, the populations from which these data were sourced, data uncertainty, etc. Typically, big data refers to multiple, non-random samples of unknown populations. Classical statistics considers big data as data that are not small, or a sample size beyond which asymptotic properties play favourably for valid results. A sample size greater than 50,000 individuals and more than 50 variables can be considered big. Big data are "secondary" in nature; that is, they are not collected for an intended purpose. They are available from (not only) the "marketing department", and this makes big data analytics a challenging task. Data analysis is the final and most important phase in the value chain of big data, with the purpose of extracting useful values and providing suggestions or decisions. Various data mining algorithms have been developed, drawing on artificial intelligence, machine learning, mode identification, statistics, the database community, etc. These algorithms cover [vdA12] cluster analysis, factor analysis, association analysis, regression and classification, all of which are of major interest. The key aspect lies in the data representation, because many data types are supported in database systems, including continuous/discrete numeric values, categorical variables, binary values, non-negative values, etc. Moreover, data representation impacts training time [Dat90]. While the goal of data mining is to extract valuable information from data, it is an undeniable fact that the quality of the results relates directly to the quantity and quality of the data being mined. That is why, in this talk, I will compare, on separate databases, the added value of two algorithms for both dimensionality reduction and feature extraction: the first one analyses binary datasets employing prior Bernoulli statistics and a partially non-negative factorization of the related matrix of log-odds [TSV+13]; the second (model-free) one decomposes huge data tensors and overcomes size restrictions by interpreting a tensor as a set of sub-tensors, proceeding with the decomposition sub-tensor by sub-tensor [VKL14].
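
As a rough illustration of the first algorithm's starting point (a sketch under stated assumptions: the log-odds construction is standard, and the generic SVD below merely stands in for the partially non-negative factorization of [TSV+13]):

    import numpy as np

    # B: binary dataset (rows = individuals, columns = binary features)
    rng = np.random.default_rng(0)
    B = (rng.random((200, 40)) < 0.3).astype(float)

    # Smooth each 0/1 entry away from the boundary, then take log-odds
    P = (B + 0.5) / 2.0                 # 0 -> 0.25, 1 -> 0.75
    L = np.log(P / (1 - P))             # matrix of log-odds

    # Generic rank-k factorization as a stand-in for the method of [TSV+13]
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    k = 5
    L_k = (U[:, :k] * s[:k]) @ Vt[:k]   # rank-k approximation of L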


Prof. Magdy A. Bayoumi,
Director, The Center for Advanced Computer Studies,
University of Louisiana at Lafayette (UL Lafayette), USA.

Title: Cyber-Physical Systems: Reality, Dreams, and Fantasy

Abstract: The integration of physical systems with networked sensing, computation networks, and embedded control with actuation has led to the emergence of a new generation of engineered systems, Cyber-Physical Systems (CPS). Such systems emphasize the link between cyber space and the physical environment (i.e., time, space, and energy). CPS represent the next generation of complex engineering systems. They are large-scale dynamic systems that offer significant processing power while interacting across communication networks. CPS will help to solve the grand challenges of our society, such as the aging population, limited resources, sustainability, the environment, mobility, security, health care, etc. Applications of CPS cover a wide band of economic, medical, and entertainment sectors. They include transportation: automobiles, avionics, unmanned vehicles and smart roads; large-scale critical infrastructure: bridges, mega buildings, the power grid, defense systems; health care: medical devices, health management networks, telemedicine; and consumer electronics: video games, audio/video processing, and mobile communication. Building Cyber-Physical Systems is not a trivial task. The difficulty arises from the existing gap in the modeling and computing of the physical and cyber environments. The design process requires new theories, models, and algorithms that unify both environments in one framework. None of the current state-of-the-art methods is able to overcome the challenges of developing a unified CPS design paradigm. Several of these issues will be discussed in this talk. Case studies of real-world CPSs will be illustrated.


Professor Torbjorn Svendsen,
Department of Electronics and Telecommunications
NTNU, N-7491 Trondheim, Norway

Title: Detection-based speech recognition and unit discovery – shifting from top-down to bottom-up processing

Abstract: Automatic speech recognition conventionally performs a top-down decoding of a speech utterance. This is typically formulated as finding the sequence of meaningful discrete symbols (typically words), expressed in terms of predefined sub-symbolic units (typically phonemes), that maximizes the likelihood of the observed utterance. After defining the vocabulary of symbols and learning the statistical relationships of the symbols from large amounts of text, building a recognizer then consists of specifying the set of sub-symbolic units, defining the structure of the symbols in terms of these units, and learning statistical models for the sub-symbolic units. The latter task requires massive amounts of training data in order to capture real-world variability.
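
In standard notation (a textbook restatement of the top-down formulation above, with X the acoustic observations and W a candidate word sequence):

    $$W^{*} = \arg\max_{W} \; P(W)\, P(X \mid W),$$

where $P(W)$ is learned from large amounts of text and $P(X \mid W)$ is assembled from the statistical models of the sub-symbolic units.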

An alternative approach to top-down decoding is to base the recognition on a bottom-up, detection-based approach incorporating data-driven structure learning. Speech signals are produced by a physical system with a limited number of degrees of freedom, imposing strong constraints on the relevant structure of the signal. Yet, at the surface level the structure is hidden by variation in the control of the articulators and by background noise. The human speech production apparatus is language independent, indicating that basing recognition on bottom-up detection of fundamental parameters of speech production can be more universal than a top-down mapping of acoustic observations to linguistic units such as phones. Further advantages of the approach are that the detectors can be individually designed, with different input representations of the speech acoustics, and that the approach does not require strict synchronicity between the outputs of the detectors.

Many detection-based approaches have, like conventional approaches, been linked to sets of pre-defined linguistic units. The phonemes are abstract classes that contain large variations in physical realization. Furthermore, the phonemes are language dependent, making it a necessity to acquire large amounts of language-specific data for training the acoustic models. The bottom-up approach does, on the other hand, lend itself to investigations into unit discovery, i.e. learning 'atomic' units of speech from information extracted from the detectors and subsequently learning the mapping between 'atomic' units and meaningful symbols like words. It is likely that these atomic units will be less language dependent and will thus reduce the requirements for language-specific training data significantly, something that is of particular importance for the development of speech recognition for under-resourced languages.


Dr. Kamil Riha,
Department of Telecommunications, Brno University of Technology
Brno, Czech Republic, European Union

Title: Ultrasound image and image sequence processing for medical utilization

Abstract: This contribution deals with possible methods for the high-accuracy, successful and effective localisation and tracking of the artery in ultrasound image sequences. The method for detection and tracking has to work for a large group of shape variants of the artery being measured, which normally occur during the examination due to pressure on the examined tissue, tilt of the probe, the setup of the sonographic device, etc. The utilization of this method lies in the still-evolving field of automating the determination of circulatory system parameters in the non-invasive clinical diagnostics of cardiovascular diseases. The general goal in this area is to extract, non-invasively, the signals contained in spatio-temporal records of organs. As part of this general goal, modern methods of artery localisation in ultrasound images will be described, together with related methods for high-accuracy artery wall tracking and measurement in ultrasound video sequences.


Professor Cham Athwal,
Associate Head of School (Research)
School of Digital Media Technology
Birmingham City University, Birmingham, U.K

Title: Surface Detection in Textured or Noisy 3D Image Sets

Abstract: The problem of detecting accurate surface information in 3D image sets is conventionally addressed using gradient-based methods. These apply a first-derivative computation to an image volume across three dimensions, measuring the changes in the intensity profile of neighbouring voxels. However, a continual problem for gradient-based operators is their performance on images where the boundaries are not clearly defined by intensity. Real image data, such as that offered by computed tomography (CT), often exhibit boundaries with a low level of contrast, particularly between areas of soft tissue, while histology and MRI often possess excessive amounts of texture, giving rise to multiple internal edges.
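
A minimal sketch of the conventional gradient-based baseline mentioned above (illustrative only; the talk's statistical method is not shown here):

    import numpy as np

    def gradient_magnitude_3d(volume):
        """First-derivative surface cue: per-voxel gradient magnitude
        across the three axes of an image volume."""
        gz, gy, gx = np.gradient(volume.astype(float))
        return np.sqrt(gx**2 + gy**2 + gz**2)

    # Toy volume: a bright ball whose boundary should light up
    z, y, x = np.mgrid[:64, :64, :64]
    ball = ((z - 32)**2 + (y - 32)**2 + (x - 32)**2 < 20**2).astype(float)
    surface_strength = gradient_magnitude_3d(ball)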

Here we present a new 3D statistical method for surface detection which provides improvements over competing methods both in terms of noise suppression and detection of complete surfaces. The methods are applied both to synthetically created image volumes and to real MRI data. Accuracy against a ground truth is assessed using the quantitative figure-of-merit performance measure, with the statistical methods shown to outperform both a 3D implementation of the Canny gradient operator and a 3D optimal steerable filter method. The results also confirm how 3D surface detection methods are able to locate complete boundaries, irrespective of the object orientation and plane of image capture, avoiding the problems that 2D methods encounter when trying to locate surfaces that exist across the plane of a 2D slice.


Prof. D. Matsakis,
Head, Time Section
US Naval Observatory, Washington, USA



Prof. Victor C.M. Leung,
Laboratory for Wireless Networks and Mobile Systems
Communications Group,
Dept. of Electrical and Computer Engineering
The University of British Columbia, Vancouver, BC, Canada V6T 1Z4

Title: Robust Access for Wireless Body Area Sensor Networks

Abstract: Recent advances in very-low-power wireless communications have stimulated great interest in the development and application of wireless technology in biomedical applications, including wireless body area sensor networks (WBASNs). A WBASN consists of multiple sensor nodes capable of sampling, processing, and communicating one or more vital signs (e.g., heart rate, brain activity, blood pressure, oxygen saturation) and/or environmental parameters (location, temperature, humidity, light) over extended periods via wireless transmissions over short distances. Low-cost implementation and ubiquitous deployment call for the use of license-exempt ISM bands, in which co-existence with other license-exempt devices, particularly WiFi radios, negatively impacts the robustness of WBASNs. We shall present some proposals to increase the robustness of wireless access in WBASNs by identifying and taking advantage of spectrum holes that are unused by co-existing devices. Simulation and experimental results are presented to show the effectiveness of our proposals in increasing the robustness of channel access in WBASNs.
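
One common way to identify such spectrum holes is simple energy detection (a sketch of the general idea, not necessarily the authors' method): a band is declared free when its measured power stays below a noise-calibrated threshold.

    import numpy as np
    from scipy.signal import welch

    def band_power(samples, fs, f_lo, f_hi):
        """Average power density of `samples` within [f_lo, f_hi] via Welch's PSD."""
        f, psd = welch(samples, fs=fs, nperseg=1024)
        mask = (f >= f_lo) & (f <= f_hi)
        return psd[mask].mean()

    def is_spectrum_hole(samples, fs, f_lo, f_hi, threshold):
        """Declare the band idle when its power is below the threshold."""
        return band_power(samples, fs, f_lo, f_hi) < threshold

    fs = 20e6  # hypothetical 20 MS/s front end
    noise = np.random.default_rng(0).normal(0, 1, 1 << 16)
    print(is_spectrum_hole(noise, fs, 2e6, 4e6, threshold=1e-6))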

Invited Talks of SPIN 2014


Prof. Chin-Hui Lee, PhD
Professor, Center for Signal and Image Processing.
School of Electrical and Computer Engineering.
Georgia Institute of Technology, Atlanta, GA 30332-0250, USA

Title: Discriminative Training from Big Data with Decision-Feedback Learning

Abstract: Recently discriminative training (DT) has attracted new attention in speech, language and multimedia processing because of its ability to achieve better performance and enhanced robustness in pattern recognition than conventional model training algorithms. When probabilistic densities are used to characterize class representations, optimization criteria such as minimum mean squared error (MMSE), maximum likelihood (ML), maximum a posteriori (MAP), or maximum entropy (ME) are often adopted to estimate the parameters of the competing distributions. However, the objective in pattern recognition or verification is usually different from density approximation. On the other hand, decision-feedback learning (DFL) adjusts these parameters according to the decision made with the current set of estimated discriminants, so that it often amounts to learning decision boundaries. In essence, DFL attempts to jointly estimate all the parameters of the competing discriminants together, to meet the performance requirements of a specific problem setting. This provides a new perspective in the recent push of Big Data initiatives, especially in cases when the underlying distributions of the data are not completely known.

The key to DFL-based DT is that a decision function that determines the performance for a given training set is smoothly embedded in the objective function, so that the parameters can be learned by adjusting their current values to optimize the desired evaluation metrics, in a direction guided by the feedback obtained from the current set of decision parameters. Some popular performance criteria include minimum classification error (MCE), minimum verification error (MVE), maximal figure-of-merit (MFoM), maximum average precision (MAP), and minimum area under the receiver operating characteristic curve (MAUC).
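
A minimal sketch of how such a smooth embedding can look for the MCE criterion (a standard construction, shown as an assumed illustration rather than the speaker's exact formulation): the hard 0/1 decision is replaced by a sigmoid of a misclassification measure, making the error count differentiable.

    import numpy as np

    def mce_loss(scores, label, gamma=1.0, eta=4.0):
        """Smooth MCE loss for one sample.
        scores: discriminant values g_k(x) for all classes; label: true class."""
        g_true = scores[label]
        others = np.delete(scores, label)
        # soft-max of the competing discriminants (eta controls the softness)
        competitor = np.log(np.mean(np.exp(eta * others))) / eta
        d = competitor - g_true                    # > 0 means misclassified
        return 1.0 / (1.0 + np.exp(-gamma * d))    # sigmoid-embedded decision

    # Example: class 2 is correct and scores highest, so the loss is small
    print(mce_loss(np.array([0.1, -0.3, 1.2]), label=2))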

In theory the DFL-based algorithms asymptotically achieve the best performance, almost surely, for a given training set with their corresponding features, classifiers and verifiers, without using knowledge of the underlying competing distributions. In practice DFL offers a data-centric learning perspective and reduces error rates by as much as 40% in many pattern recognition and verification problems, such as automatic speech recognition, speaker recognition, utterance verification, spoken language recognition, text categorization, and automatic image annotation, without the need to change the system architectures.


Prof. Dr.-Ing. Ulrich Heute
Professor at the Faculty of Engineering (TF),
Christian-Albrechts-University Kiel (since 10/93), D-24143 Kiel, Germany

Title: DSP for Brain Signals

Abstract: All human activities originate from and lead to electrical events in the central nervous system. The corresponding currents may be recorded by the well-established electro-encephalography (EEG) or the relatively new magneto-encephalography (MEG). Both yield complementary information. MEG signals in particular, whether from existing superconducting sensors ("SQUID MEG") or from the new room-temperature sensors developed in a large project at Kiel, are tiny. Their measurement in an unshielded surrounding needs sophisticated analog pick-up electronics, filtering disturbances as much as possible. However, a component that is disturbing in one application may carry information in another. So, dedicated removal of well-defined signal parts after digitization of a not-too-heavily pre-processed signal is preferable.

By means of Digital Signal Processing (DSP), activities which are irrelevant for a given investigation, and especially different types of noise as well as strong endogenous and exogenous artifacts, can be removed. Within the above large project, algorithms for this task have been developed and applied. Noise originating from various sources (thermal noise, shot noise, Barkhausen noise) can be treated via linear (digital) filters, adaptive Wiener filtering, or Empirical Mode Decomposition (EMD). By EMD, slowly varying offsets ("trends") can also be removed, as well as muscle artifacts. Muscle and eye-movement artifacts may be tackled via Independent Component Analysis (ICA), combined with, again, simple filtering or, better, Kalman filtering. Also, artifact components found by ICA may have to be "cleaned" of other activities by Wiener filtering before removal. Among external artifacts, the classical power-supply harmonics have to be dealt with. Simple notch or comb filters have disadvantages; a "hybrid filter", designed signal-adaptively in the frequency domain and applied in the time domain, is the best, though expensive, solution, also for the strong harmonic disturbances from a deep-brain stimulator. For certain artifacts, reference signals may be available, e.g. an ECG for heartbeat artifacts in EEG and MEG; then a compensation after adaptive equalization is possible. The same holds for eye-blinking artifacts with an additional oculogram measurement.
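
For reference, the simple notch filtering mentioned above (the baseline the hybrid filter improves upon) can be sketched in a few lines; the 50 Hz mains frequency and sampling rate are assumptions for illustration:

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 1000.0                    # assumed sampling rate, Hz
    f0, Q = 50.0, 30.0             # mains frequency (50 Hz in Europe), quality factor
    b, a = iirnotch(f0, Q, fs=fs)  # second-order IIR notch at f0

    t = np.arange(0, 2, 1 / fs)
    eeg_like = np.random.default_rng(0).normal(0, 1, t.size)
    contaminated = eeg_like + 5 * np.sin(2 * np.pi * f0 * t)  # power-line hum
    cleaned = filtfilt(b, a, contaminated)                    # zero-phase filtering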


Prof. Dong Hwa Kim
Professor at Dept. of Instrumentation and Control Engineering,
Hanbat National University,
16-1 Duckmyong-dong, Yuseong-gu, Daejeon 305-719, South Korea.

Title: Research Experience on Artificial Intelligence and Emotion Control, and Realistic Information Exchange System

Abstract: First of all, this lecture presents research experience with the immune system, the genetic algorithm, particle swarm optimization, bacterial foraging, their hybrid systems, and their application to real systems. This lecture will also show research experience and results on emotion for an emotion robot by AI. From this research experience, the immune system, PSO (Particle Swarm Optimization), BF (Bacterial Foraging), and hybrid systems can provide strong optimization functions for engineering fields.

In more detail, this lecture describes the research background of the immune-network-based intelligent algorithm, the PSO-based intelligent algorithm, and the bacterial-foraging-based intelligent algorithm, together with the characteristics of a novel algorithm formed by fusing them. It also illustrates the motivation and background for applying these algorithms to industrial automation systems.

Second, this lecture illustrates the immune algorithm and its application to various plants, investigating its characteristics and the possibilities for application. In the detailed description, the immune algorithm will be presented through the studied material to investigate the possibility of applying it to a plant. It suggests conditions for disturbance rejection control in the AVR of a thermal power plant, and introduces the algorithm into the tuning method of its controller.

The conventional genetic algorithm takes a long time to compute and cannot include a variety of plant information because it uses sequential computing methods, which is a problem when building an artificial intelligence for optimization. In this lecture, by introducing the clonal selection of the immune algorithm into the computing procedure, improved results will be shown. That is, the information necessary for plant operating conditions, such as the transfer function and time constant, can be calculated simultaneously. Computing time is therefore about 30% shorter than that of the conventional genetic algorithm, and overshoot is 10.6% smaller when the method is applied to the controller.

This lecture will also introduce a parameter estimation method based on the immune algorithm for obtaining a model of an induction motor. It will suggest immune-algorithm-based induction motor parameter estimation to obtain optimal values from these parameters as the load varies.

Also, this lecture will introduce an intelligent system using GA-PSO. It will introduce a Euclidean data distance to obtain fast global (rather than local) optimization by using wide data, and suggests a novel hybrid GA-PSO based intelligent tuning method in which the genetic algorithm and PSO (Particle Swarm Optimization) are fused.
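
For readers unfamiliar with the PSO half of that hybrid, here is a minimal sketch of the standard velocity/position update (generic textbook PSO, assumed for illustration; the GA fusion and Euclidean-distance selection are not shown):

    import numpy as np

    def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimize f with standard particle swarm optimization."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n_particles, dim))  # positions
        v = np.zeros_like(x)                        # velocities
        pbest = x.copy()                            # personal bests
        pbest_val = np.apply_along_axis(f, 1, x)
        gbest = pbest[pbest_val.argmin()].copy()    # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            vals = np.apply_along_axis(f, 1, x)
            better = vals < pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Example: sphere function; the optimum is at the origin
    best_x, best_val = pso(lambda z: float(np.sum(z**2)))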


Prof. Irene Y.H. Gu
Professor, Signal Processing Group,
Dept. of Signals and Systems, Chalmers University of Technology,
Gothenburg, 41296, Sweden

Title: Domain-Shift Object Tracking: manifold learning and tracking of large-size video objects with out-of-plane pose changes

Abstract: Many dynamic objects in videos exhibit out-of-plane pose changes accompanied by other deformations and long-term partial occlusions, and the objects can be large in the image. In such scenarios, visual tracking using video from a single camera is challenging. It is desirable that tracking be performed on suitable smooth manifolds in such scenarios. Stochastic modeling on manifolds is also important for tracking robustness.

In this talk, domain-shift tracking and learning on smooth manifolds are addressed. First, we review some basic concepts of manifolds and some commonly used manifold tracking methods. We then present a nonlinear dynamic model on a smooth (e.g. Grassmann, Riemannian) manifold, from which Bayesian formulae are built on the manifold, rather than in a single vector space as in the conventional case. Based on this model, particle filters are employed on the manifold. We also consider domain-shift online learning with occlusion handling. While such learning is essential for capturing dynamic objects, including deformable out-of-plane motion, and for reducing tracking drift, one also needs to suspend the learning when changes are caused by occluding objects or clutter. We show some examples of such online learning approaches. Finally, some demonstrations and evaluations from such a domain-shift tracker are shown, along with comparisons of the results to several state-of-the-art methods.


Prof. Patrick Gaydecki
Professor, Sensing, Imaging and Signal Processing Group,
School of Electrical and Electronic Engineering, University of Manchester, Manchester M60 1QD,
United Kingdom

Title: Intuitive Real-Time Platform for Audio Signal Processing and Musical Instrument Response Emulation.

Abstract: In recent years, the DSP group at the University of Manchester has developed a range of DSP platforms for real-time filtering and processing of acoustic signals. These include Signal Wizard 2.5, Signal Wizard 3 and Vsound. These incorporate processors operating at 100 million multiply-accumulate operations per second (MMACs) for SW 2.5 and 600 MMACs for SW 3 and Vsound. SW 3 features six input and eight output analogue channels, digital input/output in the form of S/PDIF, and a USB interface. For all devices, the software allows the user, with no knowledge of filter theory or programming, to design and run standard or completely arbitrary FIR, IIR and adaptive filters. Processing tasks are specified using a graphical, icon-based interface. In addition, the system has the capability to emulate in real time linear system behavior such as sensors, instrument bodies, string vibrations, resonant spaces and electrical networks. Tests have confirmed a high degree of fidelity between the behavior of the physical system and its digitally emulated counterpart. In addition to the supplied software, the user may also program the system using a variety of commercial packages via the JTAG interface.
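
As a flavour of what such a platform automates, here is a hedged desktop sketch of designing and applying an arbitrary FIR filter (a generic scipy workflow, not the Signal Wizard toolchain itself; the sampling rate and band edges are assumptions):

    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 48000.0                                             # assumed audio rate, Hz
    taps = firwin(101, [300, 3000], pass_zero=False, fs=fs)  # band-pass FIR design

    t = np.arange(0, 1, 1 / fs)
    audio = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 8000 * t)
    filtered = lfilter(taps, 1.0, audio)  # keeps the 1 kHz tone, attenuates 8 kHz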


Dr. Karlheinz Brandenburg
Professor, Institut fuer Medientechnik, TU Ilmenau, PF 100565, 98684 Ilmenau, Helmholtzplatz 2
Fraunhofer-Institut fuer Digitale Medientechnologie, Ehrenbergstr. 31, 98693 Ilmenau, Germany.

Title: Audio and Acoustic Signal Processing: The Quest for High Fidelity Continues.

Abstract: The dream of high fidelity has continued for more than 100 years. In the last decades, signal processing has contributed many new solutions and a vast amount of additional knowledge to this field. These include simple solutions like matrix multichannel systems; audio coding, which changed the world of music distribution and listening habits; active noise control; active modification of room acoustics; search and recommendation technologies to find your favourite music; and many more. So are there any problems left to be solved? Among others, I see two main research areas: Music Information Retrieval (MIR), helping us to find and organise music or teaching the playing of musical instruments, and immersive technologies for movie theatres and eventually our homes, creating the illusion of being at some other place. For such systems we use our knowledge about hearing, especially how ear and brain work together to form the sensation of sound. However, our knowledge about hearing, about psychoacoustics, is still far from complete. In fact, just in the last few years we have learned a lot about what we don't know.

The talk will touch on a number of the subjects above, explain some current work and its applications and finally talk about open research questions regarding psychoacoustics and the evaluation of audio quality.


Patrizio Campisi Ph.D.
Professor, Section of Applied Electronics,
Department of Engineering, Università degli Studi Roma TRE,
Via Vito Volterra 62, 00146 Roma, Italy

Title: Biometrics and Neuroscience: a possible marriage?

Abstract: In recent years, biometric recognition, that is, the automated recognition of individuals based on their behavioral and biological characteristics, has emerged as a convenient and possibly secure method for user authentication. In this talk we inquire into the feasibility of using brain signals as a distinctive characteristic for automatic user recognition. Despite the broad interest in clinical applications, the use of brain signals sensed by means of the electroencephalogram (EEG) has only recently been investigated by the scientific community as a biometric characteristic. Nevertheless, brain signals present some peculiarities not shared by the most commonly used biometrics, like face, iris, and fingerprints, concerning privacy compliance, robustness against spoofing, the possibility to perform continuous identification, intrinsic liveness detection, and universality, which make the use of brain signals appealing. However, many questions remain open and need deeper investigation. Therefore in this talk, taking a holistic approach, we speculate about issues such as the level of EEG stability in time for the same user, the user discriminability that EEG signals can guarantee, and the relationship of these characteristics with the different elements of the employed acquisition protocol, such as the stimulus and the placement and number of the electrodes. A detailed overview and a comparative analysis of state-of-the-art approaches will be given. Finally, the most challenging research issues in the design of EEG-based biometric systems are outlined.


Prof. Philip James Wilkinson, Australia
President, International Union of Radio Science (URSI)
Member, URSI/COSPAR Working Group on International Reference Ionosphere

Title: URSI – what is its role in the 21st century?

Abstract: The heart of URSI (the International Union of Radio Science) is radio science, an enabling science that permeates society and is central to all technology. The founding body of URSI met in Belgium in 1914, and the first URSI General Assembly took place in Belgium in 1922. URSI joined the IRC (International Research Council, 1919-1931) in 1922, and in 1931 the IRC became ICSU (now the International Council for Science), making URSI a founding scientific Union of ICSU. How relevant is such an historic body as URSI one hundred years after it was formed? This address will not answer that question directly, nor the equivalent question in the title of this talk. Instead, some of the ingredients for future success will be put forward, including a selection of the new science URSI scientists engage in as well as the changes URSI will promote in coming years.


Prof. Kazuya Kobayashi
Department of Electrical, Electronic, and Communication Engineering,
Chuo University, Tokyo, Japan
President of the Japan National Committee of URSI

Title:Rigorous Radar Cross Section Analysis of a Finite Parallel-Plate Waveguide with Material Loading

Abstract: The analysis of electromagnetic scattering by open-ended metallic waveguide cavities is an important subject in the prediction and reduction of the radar cross section (RCS) of a target. This problem serves as a simple model of duct structures such as the jet engine intakes of aircraft and cracks occurring on the surfaces of generally complicated bodies. Some of the diffraction problems involving two- and three-dimensional cavities have been analyzed thus far based on high-frequency techniques and numerical methods. It appears, however, that the solutions due to these approaches are not uniformly valid for arbitrary dimensions of the cavity. It is therefore desirable to overcome the drawbacks of the previous works and obtain solutions which are uniformly valid for arbitrary cavity dimensions. The Wiener-Hopf technique is known as a powerful, rigorous approach for analyzing scattering and diffraction problems involving canonical geometries. In this contribution, we consider a finite parallel-plate waveguide with four-layer material loading as a geometry that can form cavities, and analyze the plane wave diffraction rigorously using the Wiener-Hopf technique. Both E and H polarizations are considered.

Introducing the Fourier transform of the scattered field and applying boundary conditions in the transform domain, the problem is formulated in terms of simultaneous Wiener-Hopf equations. The Wiener-Hopf equations are solved via the factorization and decomposition procedure, leading to the exact solution. However, this solution is formal, since infinite series with unknown coefficients and infinite branch-cut integrals with unknown integrands are involved. For the infinite series with unknown coefficients, we derive approximate expressions by taking the edge condition into account. For the branch-cut integrals with unknown integrands, we assume that the waveguide length is large compared with the wavelength and apply rigorous asymptotics. This procedure yields high-frequency asymptotic expressions of the branch-cut integrals. Based on these results, an approximate solution of the Wiener-Hopf equations, efficient for numerical computation, is explicitly derived; it involves the numerical solution of appropriate matrix equations. The scattered field in the real space is evaluated by taking the inverse Fourier transform and applying the saddle point method. Representative numerical examples of the RCS are shown for various physical parameters, and the far-field scattering characteristics of the waveguide are discussed in detail. The results presented here are valid over a broad frequency range and can be used as a reference solution for validating other analysis methods such as high-frequency techniques and numerical methods.
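
For orientation, the factorization step mentioned above has the following generic scalar Wiener-Hopf form (a textbook schematic, not the specific kernel of this four-layer problem):

    $$K(\alpha) = K_{+}(\alpha)\, K_{-}(\alpha),$$

where the kernel $K$ arises from the transform-domain boundary conditions, and $K_{+}$, $K_{-}$ are regular and non-zero in overlapping upper and lower halves of the complex $\alpha$-plane; additive terms are decomposed analogously as $F(\alpha) = F_{+}(\alpha) + F_{-}(\alpha)$.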


Prof. Dr. Sneh Anand
Centre for Biomedical Engineering
Indian Institute of Technology Delhi

Title: Intelligent Real-Time Biological Signal Processor

Abstract: The human brain is a unique, ideal intelligent signal processor. In the human brain, the salience activities operate at a supernatural level. The motherboard and the CPU are interwoven at the subcellular and physiological levels.

It is a delusion to think that everything outside is a volume of space that is "not you and outside of you." The fact is that the sense of presence that is you is "everywhere." The main reason that you have more awareness of being in a body is simply the multi-sensory intelligence of the body's commands. We have the illusion that our human bodies are solid, but they are over 99.99% empty space. Input signals operate at the emotional, environmental and attention levels, besides the multiple physical, electromagnetic, chemical, mechanical and microbiological structural changes.

A living system is complex: it intelligently coordinates the communication channels, at the atomic and subatomic levels, between body, brain and mind. The natural environment plays a very vital role in programming the millions upon millions of processes occurring in the body at the quantum physical level. The human system transforms itself. Ancient physicians were physicists, and their insights are adapted in modern medicine and in the medical technologies developed from it. Biological sensors operate with algorithms that differ across species; however, the human brain networks are the most complex self-programmed processors.