Speaker:
Prof. Magdy A. Bayoumi, Director, Center of Advanced Computer Studies
University of Louisiana at Lafayette, USA
Title: Brain on Silicon
Abstract:
The brain has always been a mystery that humanity has tried to figure out; the central question has been: can we read the brain? It may be a far-fetched goal, but the road toward it has been fascinating. The Brain Computer/Machine Interface (BCI/BMI) is one of the enabling technologies for reaching this ultimate goal. BCI/BMI has great potential for solving many physically challenged people's problems (e.g., restoring missing limb functionality) via neural-controlled implants. We have designed and developed a BCI chip that overcomes the main challenges: low-bandwidth communication, small chip area, low power, low heat dissipation, and noise tolerance. The chip is adaptive and has a simple architecture and circuits. Power consumption is reduced, and the accuracy of the system has improved to 93.5% in the worst case. Depending on the application (limb control, or mental disorder monitoring and detection), the proposed architecture can be used in an invasive closed-wound implant as well as in minimally invasive implants. The proposed architecture was simulated in MATLAB and implemented with Verilog, ModelSim and Cadence. A case study of early prediction/warning and detection of epileptic seizures will be illustrated.
Speaker:
Prof. Alfredo Rosado Munoz, ETSE, GPDD,
Department of Electronic Engineering, Universidad de Valencia, SPAIN
Title: Hardware architectures for real-time signal processing computations
Abstract:
With growing computation demands in real-time classification, prediction and signal processing in general, algorithms become more complex, while low-power and small-size devices are required for real-time architectures. Matrix operations are common in such computation algorithms and, in numerous algorithms, matrix operations must also be performed repeatedly, where the result of one operation is operated on again. On the other hand, FPGA devices are very common in real-time hardware implementations. These devices contain specific blocks devoted to signal processing algorithms (DSP-MAC units, distributed RAM, etc.). Tailoring algorithms to the existing device resources is not a common approach, but doing so optimizes both resources and performance. With this view, we will describe different architectures and topologies well suited to real-time implementation: a universal neural network computation architecture, on-chip neural network training algorithms (ELM and OS-ELM), matrix computations (addition, subtraction, dot product, multiplication, inversion), restricted Boltzmann machines (RBM), and spike-based bio-inspired processing systems. We show the hardware architectures together with examples demonstrating the high capacity of the proposal, allowing neural network implementations of up to 2000 neurons, or 1000x1000 matrix operations, working at clock rates higher than 250 MHz.
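As background, here is a minimal software sketch of ELM training in NumPy, assuming the standard ELM formulation (function names and sizes are illustrative, not from the talk); it shows why the workload reduces to exactly the repeated matrix multiplications and pseudo-inverse that the architectures above map onto FPGA DSP-MAC blocks.

```python
import numpy as np

def elm_train(X, T, n_hidden=64, seed=0):
    """Extreme Learning Machine (ELM) training sketch.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    Input weights are random and fixed; only the output weights are
    solved for, via a pseudo-inverse (a matrix-multiply/inversion
    workload of the kind discussed in the talk).
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
    b = rng.standard_normal(n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn y = x1 + x2 from random samples.
X = np.random.default_rng(1).random((200, 2))
T = X.sum(axis=1, keepdims=True)
W, b, beta = elm_train(X, T)
print(np.abs(elm_predict(X, W, b, beta) - T).max())  # small residual
```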
Speaker:
Prof. David A. Clifton,
Group Leader - Computational Health Informatics (CHI) Laboratory,
Fellow of Balliol College, Oxford University, UK
Title: Signal processing for the next generation of health informatics.
Abstract:
Oxford is at the forefront of developing intelligent healthcare systems based on machine learning, including the world's first FDA-approved physiological monitoring systems based on machine learning, and research outputs that are now used to care for over 20,000 patients every month in the NHS. This talk introduces some of the methods and applications being developed at the Computational Health Informatics (CHI) Lab in the University of Oxford, which exploit "big data" approaches to machine learning that can obtain clinically-actionable information from fusing heterogeneous sources - including wearable sensors, electronic health records, and genomic/proteomic biomarkers.
Speaker:
Prof. Jae Hong Lee,
Dept. of Electrical and Computer Engineering,
Seoul National University, Seoul, Korea
Title: Cognitive Radio for Wireless Communications: Concepts and Applications
Abstract:
To meet rapidly growing traffic demands and accommodate a large number of devices, more radio spectrum is needed for future wireless communications. Considering the scarcity of radio spectrum, the utilization of radio spectrum licensed exclusively to specific users needs to be enhanced. In cognitive radio, an unlicensed user, called a secondary user, is permitted to access the spectrum allocated to a licensed user, called a primary user. When the primary and secondary users transmit their signals simultaneously, interference occurs at both users, which degrades their performance. Interference at the primary user can be avoided by a spectrum sensing technique, which prohibits a secondary user from transmitting its signal when it detects a primary user’s signal. Also, the interference level at the primary user can be kept below a certain threshold by a spectrum sharing technique, in which the secondary user adjusts its transmit power accordingly. Some recent results on cognitive radio are introduced, and its applications and future research subjects are discussed.
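To make the spectrum sensing step concrete, here is a toy energy-detector sketch in Python (a common baseline detector, given as background rather than the specific technique covered in the talk; the threshold rule and signal model are illustrative assumptions):

```python
import numpy as np

def energy_detector(samples, noise_power, threshold_db=3.0):
    """Flag a primary user's signal when the average received energy
    exceeds a threshold set relative to the known noise floor."""
    energy = np.mean(np.abs(samples) ** 2)
    threshold = noise_power * 10 ** (threshold_db / 10)
    return energy > threshold  # True -> band busy, secondary stays silent

rng = np.random.default_rng(0)
noise = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
primary = 2.0 * np.exp(2j * np.pi * 0.1 * np.arange(1000))  # toy PU waveform
print(energy_detector(noise, noise_power=1.0))              # idle band -> False
print(energy_detector(noise + primary, noise_power=1.0))    # occupied  -> True
```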
Speaker:
Prof. King Ngi Ngan,
Chair Professor,
Department of Electronic Engineering, The Chinese University of Hong Kong,
Hong Kong
Title: 3D Morphable Model and its Applications
Abstract:
In this talk, the research work on the 3D morphable model and its applications conducted in the Image and Visual Processing (IVP) Laboratory of the Chinese University of Hong Kong (CUHK) is discussed. The 3D morphable model is introduced with respect to the work carried out on face and body reconstruction. Its applications to head pose tracking, facial expression tracking, face reconstruction from a single color image, and human body reconstruction are explored. Demonstrations of the results obtained are shown on video. Lastly, some future directions will be outlined.
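For background, the classical morphable model formulation (Blanz-Vetter style; given as a standard reference, not necessarily the IVP Lab's exact model) represents a face as a mean plus a linear combination of principal components learned from registered 3D scans:

```latex
S = \bar{S} + \sum_{i=1}^{m} \alpha_i\, s_i , \qquad
T = \bar{T} + \sum_{i=1}^{m} \beta_i\, t_i
% Fitting the model to a single color image then reduces to estimating
% the low-dimensional shape (alpha) and texture (beta) coefficients
% together with pose and illumination parameters.
```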
Speaker:
Prof. Rongqing Hui,
Professor,
Department of Electrical Engineering and Computer Science, University of Kansas, USA
Title: Digital subcarrier multiplexing for optical transmission and cross-connect switching
Abstract:
With the rapid advance of high-speed CMOS electronics, digital signal processing (DSP) has become more and more popular in optical communication systems and networks. DSP provides improved performance and flexibility in optical systems, and performs many functions previously considered feasible only in the optical domain.
This presentation will discuss digital subcarrier multiplexing (DSCM) and its application in optical transmission and cross-connect switching. DSCM is a frequency-domain multiplexing technique that partitions high-speed data traffic into multiple digitally generated orthogonal subcarriers without the need for spectral guard bands between subcarriers. DSCM has the potential to provide high bandwidth efficiency, sub-wavelength spectral granularity for flexible circuit-based switching and interconnection, as well as the capability of electronic compensation of various transmission impairments. A digital-analog hybrid subcarrier multiplexing technique will also be discussed, which allows increasing the per-wavelength data rate while using relatively low-speed digital electronics.
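As background on why no guard band is required, the standard multicarrier orthogonality condition (general multicarrier theory, not specific to this talk): subcarriers spaced by \Delta f = 1/T remain separable over the symbol duration T even though their spectra overlap,

```latex
\int_{0}^{T} e^{\,j 2\pi f_k t}\,\bigl(e^{\,j 2\pi f_l t}\bigr)^{*}\,dt
  = \int_{0}^{T} e^{\,j 2\pi (k-l)\Delta f\, t}\,dt
  = \begin{cases} T, & k = l \\[2pt] 0, & k \neq l \end{cases}
% with f_k = f_0 + k\,\Delta f and \Delta f = 1/T, so no spectral
% guard band is needed between the digitally generated subcarriers.
```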
Speaker:
Prof. R. Mamlook,
Department of Electrical and Computer Engineering, Dhofar University, Oman
Title: Controlling Future Intelligent Smart Homes using Wireless Integrated Network Systems
Abstract:
Intelligent homes are in demand these days for providing comfort and safety. Such homes can be monitored remotely and controlled autonomously. The main objective is to actively save electricity and to control appliance operations. It is only by employing smart systems, as well as by using different renewable sources, that we can attain the efficiency of an eco-friendly home. Wireless Integrated Network Sensors with cloud-based data storage capability can now be employed with the advent of the Internet of Things (IoT). Operating systems running on cloud-based processors are connected simultaneously to multiple networks (Wi-Fi and GSM) and are able to report on and control devices in a home. A prototype is proposed for a multi-home-appliance monitoring system. It is based on a set of microcontrollers connected to web servers for receiving device statuses and control signals. By integrating pervasive computing devices with the power of web servers, we aim to develop a powerful analysis tool in the areas of clean energy, device safety, power conservation and power quality. The proposed system can be utilized in the areas of energy auditing, electrical safety, power management, power quality, urban planning and personal monitoring.
Speaker:
Prof. Yusuke Tahara,
Graduate School of Information Science and Electrical Engineering
Kyushu University, Japan
Title: Taste Sensor with Lipid/polymer Membranes
Abstract:
The human taste receptors do not necessarily recognize individual chemical substances. Each of the receptors for the five basic taste qualities (saltiness, sweetness, bitterness, sourness, umami) simultaneously receives multiple chemical substances. This means that human gustatory receptors have a semi-selective property, or global selectivity. Sensory evaluation, in which experienced evaluators called sensory panelists actually taste samples to evaluate them, has so far been used to estimate the tastes of samples. This method has several problems, such as low objectivity, low reproducibility, the stress possibly imposed on panelists, and the significant cost of selecting and training panelists. In the medical and pharmaceutical field, it is difficult to carry out sensory evaluations because of the potential for medication side effects. The presentation will focus on recent developments in the “taste sensor”, i.e., an electronic tongue with global selectivity based on membrane potential changes of lipid/polymer membranes, for taste evaluation of foods, beverages and pharmaceuticals. The taste sensor has been developed to realize a sensor that responds to taste-related chemical substances and can quantify the type of taste, building on the fact that humans discriminate the taste of foods and beverages on the tongue on the basis of the five basic tastes.
Speaker:
Dr. Javier Barria,
Reader, Department of Electrical and Electronic Engineering
Imperial College, London, UK
Title: On-line time series classification with application to anomaly detection
Abstract:
In this talk, recent research output on a domain-independent temporal data representation framework will be highlighted [1]. The framework, named Structural Generative Descriptions (SGDs), combines structural and statistical pattern recognition approaches; the key idea is to move the structural pattern recognition problem to the probability domain. The framework consists of three tasks: a) decomposing input temporal patterns into sub-patterns in time or any other transformed domain, b) mapping these sub-patterns into the probability domain to find attributes of elemental probability sub-patterns called primitives, and c) mining the input temporal patterns according to the attributes of their corresponding probability-domain sub-patterns. Two off-line and two on-line algorithmic instantiations of the proposed SGD framework will be briefly highlighted. The empirical evaluation of the proposed SGD-based algorithms will be summarised in the context of time series classification for the off-line algorithms, and in the context of change detection for the on-line algorithms. The talk will also highlight real-world applications where the intrinsically domain-independent nature of the proposed SGD framework can be used: i) biometric recognition and forensics, ii) smart infrastructure monitoring, iii) machine/motor health condition monitoring, iv) transportation network monitoring and, v) environmental (pollution) monitoring.
Speaker:
Prof. Brian Barsky,
Computer Science Division, School of Optometry,
Berkeley Center for New Media,
Berkeley Institute of Design, Arts Research Center,
University of California, Berkeley, USA
Title: From Vision-Realistic Rendering to Vision Correcting Displays
Abstract:
Present research on simulating human vision and on vision correcting displays that compensate for the optical
aberrations in the viewer's eyes will be discussed. The simulation is not an abstract model but incorporates real
measurements of a particular individual’s entire optical system. In its simplest form, these measurements can
be the individual's eyeglasses prescription; beyond that, more detailed measurements can be obtained using an
instrument that captures the individual's wavefront aberrations. Using these measurements, synthetic images
are generated. This process modifies input images to simulate the appearance of the scene for the individual.
Examples will be shown of simulations using data measured from individuals with high myopia (near-sightedness), astigmatism, and keratoconus, as well as simulations based on measurements obtained before and
after corneal refractive (LASIK) surgery.
Recent work on vision-correcting displays will also be discussed. Given the measurements of the optical
aberrations of a user’s eye, a vision correcting display will present a transformed image that when viewed by
this individual will appear in sharp focus. This could impact computer monitors, laptops, tablets, and mobile
phones. Vision correction could be provided in some cases where spectacles are ineffective. One of the
potential applications of possible interest is a heads-up display that would enable a driver or pilot to read the
instruments and gauges with his or her lens still focused for the far distance.
Speaker: Prof. Ashfaq Khokhar,
Professor and Chair, Department of Electrical and Computer Engineering (ECE),
Illinois Institute of Technology (IIT), Chicago, USA
Title: Content-Aware Instantly Decodable Network Coding (IDNC) in D2D Networks
Abstract:
Device-to-device (D2D) communication facilitates direct communication among smart devices without involving cellular infrastructure. Recognizing its importance in emerging applications, D2D communication has also been provisioned for in the 5G specifications. In this talk, we explore Content-Aware IDNC for cooperative communication among cooperating mobile devices, taking into account realistic constraints such as strict deadlines, limited bandwidth, and limited energy. Content-Aware IDNC exploits additional information about content, particularly when not all packets have the same importance and not all users are interested in the same quality of content. Content-Aware IDNC jointly improves quality and network coding opportunities by taking into account the importance of each packet towards the desired quality of service (QoS). We will present different formulations of this problem and explore possible solutions. This work has been jointly pursued with Yasaman Keshtkarjahromi, Hulya Seferoglu, and Rashid Ansari (all at the University of Illinois at Chicago).
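For readers new to IDNC, here is a toy illustration of the "instantly decodable" idea in Python (the two-user setup and packet contents are illustrative; the content-aware formulation in the talk builds on this basic XOR mechanism):

```python
# Instantly decodable network coding, minimal case: a sender XORs two
# packets so that each receiver can decode its missing packet in one
# step from what it already holds.
p1 = bytes([0x0A] * 4)   # packet held by user A, wanted by user B
p2 = bytes([0x5C] * 4)   # packet held by user B, wanted by user A

coded = bytes(a ^ b for a, b in zip(p1, p2))  # single broadcast: p1 XOR p2

# Each user XORs the coded packet with its side information.
recovered_by_A = bytes(c ^ a for c, a in zip(coded, p1))
recovered_by_B = bytes(c ^ b for c, b in zip(coded, p2))
assert recovered_by_A == p2 and recovered_by_B == p1
```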
Speaker: Prof. Dr. Roland Petrasch,
Beuth University of Applied Sciences Berlin,
Berlin, Germany
Title: From Computer & Communication Networks to Software-Defined Infrastructures and Smart Cloud Applications
Abstract:
Software and Platforms as a Service (SaaS, PaaS) have found their way into practice in the era of Software-Defined Networking (SDN) and Cloud Computing (CC). But what comes next? This talk discusses aspects of so-called Software-Defined Infrastructures (SDI) and the new idea of Smart Cloud Applications (SCA), in which a holistic approach leads to intelligent, software-supported business processes. These processes have access to an integrated database that connects raw IoT data with ERP/CRM data models in a Cloud Storage Hub (CSH), managed by Artificial Intelligence (AI) and Knowledge Management (KM) components.
Speaker: Prof. Zoran Ivanovski,
Faculty of Electrical Engineering and Information Technologies,
Ss. Cyril and Methodius University in Skopje, Republic of Macedonia
Title: Error Spotting: A New Approach Towards Robust Super-Resolution and Beyond
Abstract: Super-resolution (SR) algorithms are known to be very sensitive to errors in the registration of the low-resolution images, as well as to the presence of outliers. These registration errors and outliers introduce artifacts, ranging from unpleasant to very annoying, in the super-resolved image, rendering the SR procedure useless for practical purposes. The talk will focus on a novel idea for effective super-resolution that is robust to errors in the registration process. The main idea of the approach is to allow the SR process to introduce artifacts due to registration errors, to detect the locations of the artifacts, and to efficiently suppress the cause of their appearance in the final SR procedure. The approach relies on efficient feature extraction and machine-learning-based artifact detection. The idea is further developed and applied to the problem of accurate subpixel motion estimation for super-resolution. The objective is to improve the quality of the SR image by increasing the accuracy of the motion vectors used in the SR procedure. This increased accuracy of the motion vectors is achieved based on the visual appearance of error artifacts in the SR image, introduced due to registration errors. First, SR is performed using full-pixel-accuracy motion vectors obtained using any appropriate motion estimation algorithm. Then, a machine-learning-based method is applied to the resulting SR image in order to detect and classify artifacts introduced due to missing subpixel components of the motion vectors. The outcome of the classification is the subpixel component of the motion vector. In the final step, the SR process is repeated using the corrected motion vectors.
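For context, the standard multi-frame SR observation model (conventional background, not the speaker's exact formulation) makes the registration sensitivity explicit: each low-resolution frame y_k is a warped, blurred, downsampled view of the unknown high-resolution image x,

```latex
y_k = D\,H\,F_k\,x + n_k , \qquad k = 1,\dots,K
% F_k encodes the (sub-pixel) registration, H the blur, D the
% downsampling and n_k the noise. Errors in the estimated F_k
% propagate directly into the reconstruction as the artifacts the
% talk proposes to detect and suppress.
```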
Speaker: Prof. Waleed H. Abdulla,
Deputy Head of Department (Research),
Department of Electrical and Computer Engineering,
The University of Auckland, New Zealand
Title: Human Biometrics: Value and Future
Abstract:
The 2001 MIT Technology Review indicated that biometrics is one of the emerging technologies that will
change the world. Human biometrics is the automated recognition of a person using inherent, distinctive
physiological and/or involuntary behavioral features. Physiological features include facial characteristics,
fingerprints, palm prints, iris patterns, and many more. Examples of behavioral features are signature, writing
dynamics, gait, voice, and keyboard typing dynamics. The valid features used in any biometric approach must
be quantifiable, robust and distinctive. Biometric technology was initially treated as an exotic topic, but it has
recently become a fast-growing industry due to the urgent need to secure people's property, from goods to
information.
Biometric recognition comprises authentication and identification. However, the main need is for
authentication. Authentication can be pursued through Something You Own, Something You Memorize,
and Something You Carry. A biometric trait is something that everyone owns and that is available with
the person at all times; thus it will prevail. Despite all the advantages biometric recognition offers, not all
people support the use of biometrics. The proliferation of biometrics for recognizing people has raised
concerns from civil rights advocates. The issue of privacy is the big concern, along with where privacy and
security should meet. Also, the compromise of biometric data is one of the main concerns: what if
someone manages to unlawfully obtain the data? In this talk I will introduce the topic of human biometrics
and discuss several issues surrounding its proliferation and its future.
Speaker: Prof. Sri Krishnan,
Associate Dean, Canada Research Chair,
Ryerson University, Canada
Title: Connected HealthCare and Inspiring Opportunities
Abstract:
This talk will provide a general overview of connected healthcare, with some specific examples of research undertaken at the Signal Analysis Research (SAR) group at Ryerson University, Canada. Connected healthcare harnesses the power of sensors, information and communications technology, signal and data processing, analytics and machine learning for informed and better decision making in healthcare. With the emergence of the Internet of Things (IoT), wearable and wireless sensing, and real-time machine learning algorithms, connected healthcare is expected to make a significant impact on the day-to-day lives of many people. It also paves the way for tele-medicine and mobile health applications.
Speaker: Prof. Dr. Muhammet KÖKSAL
Chairman, Electrical and Electronics Engineering,
İstanbul Gelişim University, Turkey
Speaker: Anca Ralescu, Ph.D.
Professor,
School of Computing Sciences & Informatics,
College of Engineering & Applied Sciences,
University of Cincinnati, USA
Speaker: Prof. António Dourado,
University of Coimbra, Portugal, European Union
Title: Epileptic Seizures Prediction by EEG Signal Processing: Progresses and Challenges
Abstract:
Epilepsy is a neurological disease affecting about 1% of the population, anywhere, at any age. About one third of these patients are insensitive to drug treatment or brain surgery, and these unfortunate people must live daily with the possibility of having a seizure at any time. This fact imposes severe constraints and limitations on their lives. The possibility of seizure prediction by processing multichannel EEG signals has been, and remains, the subject of extensive research worldwide. Much progress has been reported, but serious challenges remain before prediction algorithms achieve clinical acceptance, allowing the development of transportable devices capable of warning the patient when a seizure is coming. Computational intelligence and machine learning are viewed nowadays as techniques that may contribute to good predictors. A review of these techniques will be made and the published results critically analyzed, namely those resulting from the European FP7 project EPILEPSIAE. Artificial neural networks and support vector machines have been the main techniques, extensively developed and tested on a sample of 275 patients from the European Epilepsy Database. The present challenges will be discussed, concerning algorithms, biosignals, and transportable devices. Given the rapid development of powerful miniaturized hardware, substantial progress can be expected in the near future.
Speaker: Carlos M. Travieso-Gonzalez
Vice-Dean, University of Las Palmas de Gran Canaria
Institute for Technological Development and Innovation in Communications (IDeTIC)
Signals and Communications Department, Campus Universitario de Tafira, s/n
Pabellón B - Despacho 111, 35017 - Las Palmas de Gran Canaria, SPAIN
Title: Diagnostic aid system for neurodegenerative diseases
Abstract:
The development of tools to aid diagnosis has been an important element in recent years in strengthening the connection between technology and medicine. This has greatly facilitated the work of medical doctors and has made it possible to achieve greater efficiency in the health services of any country. Among the various medical specialties is neurology. In particular, there has been a collaboration over several years with the doctors of the “Dr. Negrín” University Hospital in Gran Canaria (Spain) to develop a non-invasive tool to obtain information about patients and to observe the evolution of their disease. We have worked with patients with Alzheimer's to observe the characteristic variations in the loss of emotions. This is one of the elements that doctors usually observe, and it allows them to form an opinion about a patient's evolution. Therefore, a tool has been developed that captures this information (level of emotion, or arousal) with a webcam and informs the doctor in real time about the state of the patient's emotion. This tool can be used with or without the presence of the doctor and helps to decongest patient lists, obtaining information from each patient under the recording protocol proposed by the medical doctor.
Speaker: Prof. Sotiris Skevoulis, PhD
Professor of Computer Science
Seidenberg School of CSIS,
Pace University, USA
Title: Engineering Successful Partnerships Between Academia and Industry: Offering Customized Educational Programs to Industrial Partners
Abstract:
The talk focuses on the opportunities and challenges of such programs. It also suggests solutions to common problems found in developing and sustaining educational partnerships between academia and industry. In the past, many such projects started with great enthusiasm and support from both partners, but soon faded and finally dissolved. The talk reflects on the strong and weak points of such partnerships. The goal is to ensure that the academic programs offered match the actual needs of the industrial partners by providing cutting-edge technology transfer and customized training for their employees. A specific case study will be presented and analyzed, highlighting the key points of the keynote talk.
Speaker:
Prof. Hsing Luh
National Chengchi University, Taipei
Title: Computational Models for Priority Multi-Server Queues of Impatient Customers
Abstract:
Considering high-priority and low-priority impatient customers, we construct a computational model for a Markovian system of multi-server queues. After an exponentially distributed duration in the system, with distinct rates for each class, customers of either class may abandon service and not return. In this presentation, the service time is assumed exponentially distributed for all customers, and the service discipline may be First Come First Served (FCFS) or Last Come First Served (LCFS). By deriving the Laplace transforms of the defined random variables and applying the matrix-geometric method with a direct truncation, we obtain an approximation to the stationary distribution. We calculate the expected waiting time for both classes of customers. Given the service and abandonment rates for each class of customers, we derive performance measures related to the stationary probability distributions and conditional waiting times.
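As a simpler single-class reference point, the Erlang-A (M/M/c+M) model captures the abandonment mechanics that the two-priority model above generalizes (stated here as standard background, not the talk's own derivation): with arrival rate \lambda, c servers of rate \mu, and patience rate \theta, the birth-death balance gives

```latex
\pi_{n+1} = \frac{\lambda}{\min(n+1,\,c)\,\mu + \max(n+1-c,\,0)\,\theta}\;\pi_n
% Waiting customers contribute an extra death rate (n-c)\theta. The
% two-class priority model replaces this scalar recursion with a
% level-structured chain, solved by matrix-geometric methods with a
% direct truncation as in the talk.
```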
Speaker: Yi Qian,
Professor
Department of Electrical and Computer Engineering
University of Nebraska-Lincoln
Title: Big Data, Cloud Computing and Smart Grid Communications
Abstract:
In this talk, we explore a big-data-driven and cloud-computing-based information and communication technologies (ICT) framework for the smart grid. Through the proposed framework, pricing forecasts can be provided to customers and energy forecasts to power generators. In addition, real-time monitoring and modern control can be applied to the smart grid to prevent system failures or blackouts. Advanced control is achieved using information gathered internally from the smart grid through private networks deployed by utility companies, and external information from public sources through the Internet. Big data analytics is introduced so that useful information can be mined from the collected data. Cloud computing is applied to perform the big data analytics in the proposed ICT framework.
Speaker: Jorma Skyttä, Professor,
Electrical Engineering, Department of Signal Processing and Acoustics,
Aalto University, Finland
Title: Biometric feature detection from surveillance data using non-calibrated techniques
Abstract:
The development of high-quality video surveillance systems provides the ability to perform measurements from image data. This talk describes and demonstrates the extraction of basic human biometric features from surveillance video camera data using an uncalibrated single-view surveillance camera system. Perspective-based photogrammetric techniques are applied in situations where only minimal real-world information is available. This information usually comprises a reference height and two orthogonal sets of parallel lines on a reference plane, all of which must be measurable in the real world and visible in the camera's field of view. Using this type of setup, the orthogonal distance from the reference plane can be computed at any scene point. Measured biometric features include height and body dimensions. In addition, gait features and a walking profile can be estimated when the photogrammetric techniques are extended from single images to video streams. Such information can be used in forensic investigation and in various other types of applications where photogrammetric information is needed. This can all be done without further information about the camera calibration or position.
Speaker: Dr. Mehmet Emir KOKSAL,
Associate Professor,
Ondokuz Mayis University, Turkey
Title: Signal Processing through Cascaded Networks and Commutativity
Abstract:
Many signal processing systems treat signals successively through a chain connection of subsystems, each of which performs part of a complete process. The sequence of processing is important for achieving the desired aim. It is determined by the system designer considering the specific method of synthesis, environmental conditions, cost, durability, reliability, sensitivity, robustness, stability and many other engineering features of the processing device. In the case of time-invariant subsystems, their order can be changed without affecting the main functioning while achieving better performance characteristics. In the time-varying case, however, one cannot change the order of interconnection of two time-varying subsystems to arrive at better performance unless these subsystems are commutative; otherwise, the complete system loses its function and becomes useless. In this talk, after giving the general commutativity conditions and their applications to well-known second-order systems, results of numerical solutions for some commutative pairs are presented to emphasize the role of commutativity in improving performance characteristics.
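The contrast between the two cases can be stated compactly (standard system-theory background, not a result of the talk): a cascade of time-invariant blocks acts by convolution of the impulse responses, which always commutes,

```latex
(h_1 * h_2)(t) = \int_{-\infty}^{\infty} h_1(\tau)\,h_2(t-\tau)\,d\tau
               = (h_2 * h_1)(t)
% For linear time-varying blocks with kernels h_i(t,\tau), the cascade
% \int h_1(t,\sigma)\,h_2(\sigma,\tau)\,d\sigma generally depends on
% the order; the pairs for which the two orders coincide are exactly
% the commutative pairs studied in the talk.
```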
Speaker: Professor A.H. Sadka,
Director of Centre for Media Communications Research,
Brunel University, London, England, UK
Title: Comparative performance evaluation of HEVC under error-free and error-prone conditions
Abstract:
This research study presents a comparative analysis of the two most recent video coding standards, namely High Efficiency Video Coding (H.265|HEVC) and Advanced Video Coding (H.264|AVC). The experimental work is conducted on different video test sequences with various spatial resolutions, initially in an error-free setting. In this work, the encoding efficiency of the HEVC algorithm is compared against that of H.264/AVC using the same encoding parameters for both standards in order to ensure fairness of comparison. The reference software used for the H.265|HEVC codec is the HEVC Test Model (HM) version 16.7, and for H.264|AVC the Joint Model (JM) version 19. The two codecs are compared using an objective quality metric (Y-PSNR). 200 frames of each sequence are tested, and the tests are carried out on computers with an Intel Xeon E3-1246 V3 CPU @ 3.5 GHz, 8 GB RAM, and Windows 7 Enterprise SP1 64-bit; both codecs' source code is compiled using Microsoft Visual Studio 2013 (64-bit). The obtained results show that the H.265|HEVC encoder can achieve the same subjective quality as the H.264|AVC encoder with a 40% reduction in output bit rate. However, the Y-PSNR results show that the bit-rate saving of H.265|HEVC decreases to around 15% for low spatial resolution (176×144 and 352×288) video sequences. Furthermore, this study analyses and compares the error sensitivity (with no error resilience) of both codecs: bit error patterns are generated using Network Simulator version 3 (NS-3) and injected with loss rates ranging from 0% to 14%.
Speaker: Prof. K. Gopalan,
Department of Electrical and Computer Engineering,
Purdue University Northwest, Hammond, USA
Title: Audio Steganography and Watermarking for Information Hiding
Abstract:
Information hiding and steganography are concerned with embedding information in a media (cover) signal in an imperceptible manner. Applications of steganography include watermarking for copyright protection and authentication, data hiding for secure storage and transmission, and covert communication over unclassified channels. Indiscernible hiding of information in an audio signal is more challenging than invisible modification of an image or video signal due to the wide dynamic range of human audibility in frequency and power level. In spite of this challenge, human auditory system imperfections, which lead to psychoacoustic masking effects in hearing and perception, can be exploited for unnoticeable modification of a cover audio signal in accordance with a given piece of covert information. Since the modification is carried out in the masked regions of perceptibility, the information-embedded audio (stego) signal appears the same as the original signal in both spectrogram and perceptual quality. Successful embedding depends, among other factors, on the imperceptibility of any difference between the original cover signal and the stego signal, the robustness of the hidden information to noise, and a recovery key that does not require the original cover audio signal. This talk will provide an overview of psychoacoustic-masking-based audio steganography with an emphasis on newly developed tone insertion techniques in the spectral and cepstral domains, and their extension to image embedding. Robustness of hidden data to noise and attacks, and quantitative measures of perceptual difference, will also be discussed. A related waveform-domain steganography that indirectly relies on the auditory masking property will be presented, with its extension to hiding data in images.
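As a rough illustration of the tone-insertion idea, here is a toy Python sketch (purely illustrative: the frequencies, amplitude rule, and FFT comparison are assumptions chosen so the demo decodes reliably, not the speaker's spectral/cepstral methods, which use proper psychoacoustic masking models):

```python
import numpy as np

FS = 8000                  # sample rate (Hz)
FRAME = 512                # samples per frame; one hidden bit per frame
F0, F1 = 1875.0, 2500.0    # illustrative bin-centered tone frequencies

def embed(cover, bits, rel_amp=0.25):
    """Hide bits by inserting a weak tone into each frame.

    The tone amplitude is tied to the frame's RMS level, a crude
    stand-in for a real masking threshold; rel_amp is set high here
    only so the toy demo extracts reliably.
    """
    stego = np.asarray(cover, dtype=float).copy()
    t = np.arange(FRAME) / FS
    for i, bit in enumerate(bits):
        seg = slice(i * FRAME, (i + 1) * FRAME)
        amp = rel_amp * np.sqrt(np.mean(stego[seg] ** 2) + 1e-12)
        stego[seg] += amp * np.sin(2 * np.pi * (F1 if bit else F0) * t)
    return stego

def extract(stego, n_bits):
    """Recover bits by comparing spectral magnitude at the two tone bins."""
    freqs = np.fft.rfftfreq(FRAME, 1 / FS)
    k0 = int(np.argmin(abs(freqs - F0)))
    k1 = int(np.argmin(abs(freqs - F1)))
    out = []
    for i in range(n_bits):
        spec = np.abs(np.fft.rfft(stego[i * FRAME:(i + 1) * FRAME]))
        out.append(int(spec[k1] > spec[k0]))
    return out

# Toy usage with a noise "cover" signal.
rng = np.random.default_rng(0)
cover = 0.1 * rng.standard_normal(FRAME * 8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(extract(embed(cover, bits), len(bits)) == bits)
```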
Speaker: Professor Pier Luigi Dragotti,
Department of Electrical and Electronic Engineering,
Imperial College London, UK
Speaker: Dr. M. R. Swash,
Department of Electronic and Computer Engineering,
Brunel University London, UK
Title: Scalable Video Acquisition and Visualisation
Abstract: Imaging systems are widely utilised in almost all applications of entertainment, health, security and robotics. Due to the explosive growth of end-user devices of different sizes, resolutions and types, there is a great need for a scalable video imaging solution that serves the new generation of visualisation systems. This talk focuses on scalable video acquisition and visualisation, in particular the Holoscopic 3D imaging system, which is a true 3D imaging principle and offers scalable digital images in both 2D and 3D formats.
Speaker: Prof. Peter Bauer,
Department of Electrical Engineering,
College of Engineering, University of Notre Dame, USA
Title: A Set-Membership Method for Electric Vehicle Parameter Estimation
Abstract:
A method for in-vehicle parameter estimation is introduced for EVs using set-membership estimation techniques. The goal is to estimate three key parameters that profoundly influence vehicle range and health: wind speed, rolling resistance and overall powertrain efficiency. The environmentally influenced parameters in particular, such as wind speed and rolling resistance, are often highly uncertain and hard to measure directly, and are thus prime candidates for set-membership estimation techniques. Overall powertrain efficiency, from power source to wheels, is also a quantity that is usually not easy to determine; in addition, it is a key quantity for assessing vehicle health. The talk also shows how consensus techniques in set-membership estimation can provide useful information for determining parameters that are shared by vehicles. The findings are illustrated with drive-cycle simulations based on the FTP-75 urban cycle.
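The core recursion of set-membership estimation, in its standard bounded-noise linear-regression form (given as background; the talk's vehicle models and consensus step build on this idea): with measurement y_k, regressor \varphi_k and noise bound \varepsilon, the feasible parameter set shrinks by intersection,

```latex
\Theta_k = \Theta_{k-1} \cap
  \bigl\{\theta : \lvert\, y_k - \varphi_k^{\top}\theta \,\rvert \le \varepsilon \bigr\}
% Every \theta \in \Theta_k (here: wind speed, rolling resistance,
% efficiency) is consistent with all data seen so far; vehicles can
% intersect their sets for shared parameters, which is the consensus
% mechanism mentioned above.
```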
Speaker: Dr. Kush Varshney,
Mathematical Sciences Department, IBM T. J. Watson Research Center, USA.
Title: On Making Machine Learning Safe
Abstract:
Machine learning algorithms are increasingly influencing our decisions and interacting with us in all parts of our daily lives. Therefore, just like for chemical plants, roads, vehicles, and myriad other systems, we must ensure that systems involving machine learning are safe. In this talk, we first discuss the definition of safety in terms of risk, epistemic uncertainty, and the harm incurred by unwanted outcomes. Then we examine dimensions along which certain real-world applications may not be completely amenable to the foundational principle of modern statistical machine learning: empirical risk minimization. In particular, we note an emerging dichotomy of applications: ones in which safety is important and risk minimization is not the complete story (we name these Type A applications), and ones in which safety is not so critical and risk minimization is sufficient (we name these Type B applications). Then, we discuss how four different strategies for achieving safety in engineering (inherently safe design, safety reserves, safe fail, and procedural safeguards) can be mapped to the machine learning context through interpretability and causality of predictive models, objectives beyond expected prediction accuracy, human involvement for labeling difficult or rare examples, and user experience design of software. Finally, we detail principled formulations for learning Boolean rule-based classifiers based on compressed sensing that are interpretable and therefore provide inherently safe design.
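For reference, empirical risk minimization, the foundational principle the talk examines (standard definition): choose the model f that minimizes the average training loss,

```latex
\hat{R}_{\mathrm{emp}}(f) = \frac{1}{n}\sum_{i=1}^{n} L\bigl(f(x_i),\,y_i\bigr)
% an estimate of the true risk R(f) = \mathbb{E}[L(f(X),Y)]. The
% talk's Type A applications are those where minimizing this average
% alone ignores epistemic uncertainty and rare, high-harm outcomes.
```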
Speaker: Prof. Hui-Huang Hsu,
Department of Computer Science and Information Engineering,
Tamkang University, Taiwan.
Title: Recognizing Human Behavior via Smartphone Sensory Data
Abstract: Understanding the movement, or even the behavior, of humans requires various kinds of sensory data from wearable devices or environmental sensors. Nowadays, smartphones are equipped with sensors that can serve such a purpose. Most importantly, people carry a smartphone most of the time. Therefore, compared to other types of sensors, the smartphone is an unobtrusive sensing device for the user. In this talk, we will first introduce the general concepts. We will then discuss some of the possibilities. Results of selected research projects will also be presented.
Speaker: Prof. Alexander Kurganov
Tulane University,
LA, USA
Title: Central Schemes: A Powerful Black-Box Solver for Nonlinear Hyperbolic PDEs
Abstract: Nonlinear hyperbolic PDEs arise in modeling a wide range of phenomena in physical, astrophysical, geophysical, meteorological, biological, chemical, financial, social and other scientific areas. Being equipped with efficient, accurate and robust numerical methods is absolutely necessary to make substantial progress in all of those fields of research. This talk will be focused on non-oscillatory central schemes, which can be used as a high-quality black-box solver for general hyperbolic systems of conservation laws. I will first briefly show their derivation, discuss some current developments, and then present several recent applications including modern network models of traffic and pedestrian flows, gas pipes, and dendritic river systems.
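For orientation, the equations in question and the simplest central scheme, the classical first-order Lax-Friedrichs prototype (the talk's schemes are higher-order, non-oscillatory refinements of this idea):

```latex
u_t + f(u)_x = 0 , \qquad
u_j^{n+1} = \frac{u_{j-1}^{n} + u_{j+1}^{n}}{2}
  - \frac{\Delta t}{2\Delta x}\Bigl( f\bigl(u_{j+1}^{n}\bigr) - f\bigl(u_{j-1}^{n}\bigr) \Bigr)
% No Riemann solvers or characteristic decompositions are required,
% which is what makes central schemes usable as a black-box solver
% for general hyperbolic systems of conservation laws.
```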
Speaker:
Prof. Hsi-Pin Ma
Department of Electrical Engineering,
National Tsing Hua University, Taiwan 30013
Title: Wireless Healthcare: Electronics and Systems
Abstract: Mobile health is an emerging topic for healthcare systems. Based on the infrastructure of mobile communications systems, a healthcare system can provide many more services. In this talk, a platform for a mobile healthcare system is presented, with ECG/respiration monitoring as an example. A low-complexity, low-power wireless sensor node can record the ECG signal and monitor it continuously over a long time. Via a mobile phone, the existing 3G/Wi-Fi network can send back the recorded ECG signals for further analysis, with no extra deployment cost for the whole infrastructure. We have also proposed algorithms for ECG signal analysis implemented both on mobile phones and in the cloud, giving doctors more options for providing medical services. With the possibilities of wearable sensing techniques, we can also extend the techniques to lifestyle applications and interdisciplinary collaborations. Some demos will be presented within the talk.
Speaker: Prof. Peter Puschner
Technische Universitaet Wien
Real-Time Systems Group
Vienna, Austria
Title: Time-Composable Network Interfaces
Abstract: Composing networked computer systems of highly autonomous components may lead to control conflicts in the communication system. These control conflicts can be avoided by connecting the components via temporal-firewall interfaces in combination with a time-triggered communication network. We will show the benefits of this communication strategy and discuss the two access strategies (asynchronous and time-synchronized, respectively) for communicating via temporal-firewall network interfaces.
Speaker:
Prof. Marius Pedersen
Director of The Norwegian Colour and Visual Computing Laboratory,
Faculty of Computer Science and Media Technology,
Gjovik University College, Norway
Title: Towards a Perceptual Image Quality Metric
Abstract: The evaluation of image quality is a field of research that has gained attention for many decades. Since subjective evaluation of image quality is time-consuming and resource-demanding, there is an increasing effort to obtain an objective image quality metric capable of predicting perceived image quality. In this talk we give an overview of existing image quality metrics and the advancements in the field towards obtaining a perceptual image quality metric. We focus specifically on image quality metrics that simulate the human visual system, and on how well they are able to predict perceived image quality.
Speaker:
Prof. Henrik Hautop Lund,
Professor, Center for Playware,
Technical University of Denmark
Title: Playful Modular Technology – Play with Networks of Radio Communicating Interactive Modules
Abstract: With recent technological developments, we have become able to exploit robotics and modern artificial intelligence (AI) to create playware in the form of intelligent hardware and software that creates play and playful experiences for users of all ages. Such playware technology acts as a play force which inspires and motivates you to enter into a play dynamic, in which you forget about time and place and simultaneously become highly creative and increase your skills: cognitive, physical, and social. The Playware ABC concept allows you to develop life-changing solutions for anybody, anywhere, anytime, through building bodies and brains that allow people to construct, combine and create. Two decades of scientific studies of such playware, in the form of playful robotics, LEGO robots for kids, minimal robot systems, and user-friendly, behavior-based, biomimetic, modular robotics, led Prof. Lund's students to form the Universal Robots company, which disrupted the industrial robotics sector and was recently sold for 285 million USD. Another example of exploiting this playful and user-friendly technology development is the modular interactive tiles system, Moto tiles (www.mototiles.com), which is designed as an alternative form of physical rehabilitation exercise that allows elderly citizens and patients to break away from monotonous training programs and participate in exercise that is fun, exciting and therefore more motivating. Qualitative studies show that therapists and the elderly find training with the modular interactive tiles fun and highly motivating. Further, scientific studies have shown that training with the modular tiles has a large effect on the functional abilities of the elderly. Tests of effect show that training with the Moto tiles provides improvements across a broad range of abilities, including mobility, agility, balance, strength and endurance. The playful training improves the abilities of the elderly in many areas of high importance for activities of daily living, in contrast to several other forms of training and exercise, which typically improve only a subset of these abilities. It has been shown that playful training gives significant effects with substantially fewer training sessions than are needed with more traditional training methods.
Speaker:
Prof. Raimo Kantola
Dept of Communications and Networking
Aalto University, Finland
Title: Trust and Security for 5G and the Internet
Abstract: 5G is expected to provide ultra-reliable service. At the same time, 5G is the next step in the evolution of the Internet. In the Internet, legitimate services can fail due to unpredictable malicious activities, including Distributed Denial of Service attacks and intrusions that use viruses, Trojans and botnets. 5G needs a significant step forward in its approach to security in order to claim to provide ultra-reliable service. At the level of interactions between senders and receivers, we propose a new cooperative firewall technology, called Customer Edge Switching (CES), that admits all traffic flows based on policy. We describe our work on CES and towards dynamic policies that make use of the reputation of all Internet entities, such as hosts, customer networks, DNS servers and applications. The idea is to collect and share evidence of malicious activity, aggregate it into reputations for the different entities, and disseminate the reputation values to the cooperative firewalls, which can step up their controls based on the overall security situation and the reputations. We describe our approach to deployment, which should be feasible one network at a time. To achieve this, we propose a Realm Gateway and a step-wise deployment of trust/reputation processing into the network.
Speaker:
Prof. Xavier Fernando
Director, Ryerson Communications Lab
Ryerson University, Canada
Title: Upcoming Technologies for an Interconnected Society
Abstract: Information and communication technologies (ICT) have been changing the way we live, and the changes have been very significant in recent times. Social media has become a part of our lives and goes way beyond being a fun accessory: it has played a key role in unleashing people power and creating collective opinions in places like Egypt and Libya, and it enables like-minded people to connect and share many things. The difference between computers, phones, cameras, televisions, audio players and even the bank machine is diminishing, as a single device can perform all these tasks and much more. Our kids use more ‘i’ devices than toys. Photonic and radio technologies jointly enable anytime, anywhere, broadband wireless connectivity. Modern wireless technology provides numerous seamless services, from receiving images, video and tweets from deep space to providing communications and tracking for underground miners. The Internet of Things is expanding rapidly. The world is increasingly populated with sensors and connected devices that automatically communicate, make decisions and perform complex tasks. The power grid is getting smarter, self-healing and more resilient. Autonomous electric cars will soon be commonplace, receiving electricity from and redelivering it to the grid. Our homes will soon generate their own energy and be more interactive.
Speaker:
Prof. Afaq Ahmad
Department of Electrical & Computer Engineering,
Chair, Pre-Specialization Academic Advising Unit
College of Engineering
Sultan Qaboos University, Muscat, Oman
Title: Trustworthy Applications of Linear Feedback Shift Registers
Abstract: Many research areas use Linear Feedback Shift Registers (LFSRs) to solve increasingly complex problems. With the continuous development of information technology, numerous applications of LFSRs have been achieved and successfully embedded in systems. Some of the popular applications of LFSRs are visible in cryptography, testing, coding theory and wireless communication systems. Each application of LFSRs requires unique attributes and qualities. This contribution will highlight and describe various issues that arise when LFSRs are used for different applications. In particular, features such as smaller area, higher efficiency, lower power dissipation, low cost and more secure implementation will be discussed.
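To make the mechanism concrete, here is a minimal Fibonacci-style LFSR sketch in Python (the width and tap positions are one illustrative maximal-length configuration, not drawn from the talk):

```python
def lfsr(seed, taps, nbits):
    """Fibonacci-style linear feedback shift register.

    seed:  initial non-zero register contents
    taps:  bit positions XORed to form the feedback (polynomial taps)
    nbits: register width
    Yields one pseudo-random output bit per shift.
    """
    state = seed
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # XOR the tapped bits
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        yield fb

# 4-bit maximal-length LFSR, taps from x^4 + x^3 + 1 (period 2^4 - 1 = 15).
gen = lfsr(seed=0b1001, taps=(3, 2), nbits=4)
print([next(gen) for _ in range(15)])       # one full period of output bits
```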
Speaker: Dr. Leonardo BOCCHI
Researcher, Department of Information Engineering
Electronic and Information Bioengineering
University of Florence
Florence, Italy
Title: Microcirculation Systems: Current Research and Perspectives
Abstract: The microcirculation is where the exchange of substances between the blood and the tissues takes place. The study of microcirculatory hemodynamics thus provides the key to assessing the function of the perfused organs. In recent years, it has become clear that the microcirculation may play a significant role in the pathophysiology of many diseases, and not just those, e.g. diabetes, cardiovascular disease and scleroderma, that are conventionally regarded as being of microcirculatory origin. Several methodologies have been applied, together with numerous tests involving different stimulations (thermal, ischemic, pharmacological), particularly for microcirculation assessment in the skin. The application of mathematical models, both physical and engineering, has initiated a new era with a rapidly increasing understanding of microvascular function. The current state of the art in this field thus includes several devices with different properties and features, providing complex data that are currently analyzed according to various mathematical and physical models. However, the current lack of consensus, and the need for agreed guidelines, is delaying routine clinical application of what has been discovered.
Speaker: Prof. Rangaraj M. Rangayyan,
Department of Electrical and Computer Engineering
University of Calgary, Calgary, Alberta, Canada
Title: Computer-aided Diagnosis of Retinopathy of Prematurity
Abstract: The structure of the blood vessels in the retina is affected by diabetes, hypertension, arteriosclerosis, retinopathy of prematurity (RoP), and other conditions through modifications in shape, width, and tortuosity. Quantitative analysis of the architecture of the vasculature of the retina could assist in monitoring the evolution and stage of pathological processes, their effects on the visual system, and the response to treatment. Computer-aided detection, modeling, and quantitative analysis of features related to the retinal vascular architecture could assist in consistent, quantitative, and accurate assessment of pathological processes by ophthalmologists. This seminar provides details on digital image processing and pattern recognition techniques for the detection and analysis of retinal blood vessels, detection of the optic nerve head, modeling of shape for quantitative analysis of the temporal arcades, measurement of the thickness of retinal vessels, and detection of tortuous vessels. The techniques include methods for the detection of curvilinear structures, the Hough transform, Gabor filters, phase portraits, and specific algorithms for quantitative analysis of patterns of diagnostic interest. Analysis of a dataset of retinal fundus images of 19 premature infants with plus disease, a proliferative stage of RoP, and 91 premature infants without plus disease resulted in an area under the receiver operating characteristic curve of up to 0.98 using our parameter to quantify tortuosity. A graphical user interface is being developed to facilitate clinical application of the methods. The methods should assist in computer-aided diagnosis, follow up, and clinical management of premature infants possibly affected by RoP.
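Among the techniques listed, the Gabor filter is the workhorse for detecting the oriented, piecewise-linear structure of vessels; its real 2D form (the standard definition, with parameters described generically rather than taken from the talk) is

```latex
g(x, y) = \exp\!\left[ -\frac{1}{2}\left( \frac{x^{2}}{\sigma_x^{2}}
          + \frac{y^{2}}{\sigma_y^{2}} \right) \right] \cos(2\pi f_o x)
% \sigma_x and \sigma_y elongate the kernel along the vessel direction
% and f_o is matched to the vessel thickness; a bank of rotated
% versions is applied and the maximum response kept at each pixel.
```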
Speaker: Prof. Radim Burget,
Signal Processing Laboratory,
Department of Telecommunications, Brno University of Technology,
Brno, Czech Republic, European Union
Title: Signal processing and automation: Trends
Abstract: Industry 4.0 is a collective term embracing a number of contemporary automation, data exchange and manufacturing technologies. It has been defined as 'a collective term for technologies and concepts of value chain organization' which draws together Cyber-Physical Systems, the Internet of Things and the Internet of Services. Signal processing has a big influence on this effort. Although much research work has been done in this area, its transfer from research laboratories into the business environment very often fails, and there are plenty of obstacles that prevent its deployment in industry. This presentation will provide an overview of technologies introduced in recent years that have made the successful journey from research lab to business deployment. Furthermore, it will discuss complementary technologies related to artificial intelligence that help in industrial automation and security.
Speaker: Prof. H. Vakilzadian
Department of Electrical and Computer Engineering
University of Nebraska-Lincoln
Lincoln, Nebraska, United States
Title: Challenges in Development of a Simulation-Based Electrical Engineering Program
Abstract: Mathematical modeling, computational algorithms, and the science and technology of complex and data-intensive high-performance computing are having an unprecedented impact on the health, security, productivity, and competitiveness of the United States. Exploitation of the new capabilities, however, is only achievable when basic research on the major components of computational modeling and simulation is performed. In electrical and computer engineering, advances in computational modeling and simulation offer rich possibilities for understanding the complexity of engineered systems, predicting their behavior, and verifying their correctness. Although modeling and simulation (M&S) has been around for several decades, its importance in research and application areas is only now being exploited, especially with regard to the challenges in M&S for engineering complex systems, according to a report by the U.S. National Science Foundation's Blue Ribbon Panel on Simulation-Based Engineering Science (SBES) [1], the White House's American Competitiveness Initiative (ACI) [2], the U.S. Congressional Caucus on M&S [3], and more [4-7]. The current state of M&S can be summarized as follows: 1. The importance of M&S in the design and development of physical systems is fairly well understood. 2. Research is moving ahead on challenges in M&S for engineering complex systems. 3. The references above all recommend the emergence of an undergraduate discipline in SBES. 4. Major corporations offer great career opportunities for graduates with SBES knowledge. However, there is no known established program in electrical engineering which has identified the required skills, educational program requirements, training requirements, responsibilities, job descriptions, or labor codes. This presentation provides the elements of an M&S-based electrical engineering program and the challenges involved in the development and implementation of such a program for workforce development. This study was funded in part by NSF under grant number 0737530.
Speaker: Prof. Yi Qian
Department of Electrical and Computer Engineering
University of Nebraska-Lincoln
Lincoln, Nebraska, United States
Title: Security for Mobile Wireless Networks
Abstract: Wireless communication technologies are ubiquitous nowadays. Most smart devices have cellular, Wi-Fi, and Bluetooth connections. These technologies have been developed over many years; nonetheless, they are still being enhanced. More development can be expected in the next five years, such as faster transmission data rates, more efficient spectrum usage, and lower power consumption. Similarly, cellular networks have evolved through several generations, for example GSM as part of the 2G family, UMTS as part of the 3G family, and LTE as part of the 4G family. In the next few years, cellular networks will continue this evolution to keep up with the fast-growing needs of customers. Secure wireless communications will certainly be part of other advances in the industry, such as multimedia streaming, data storage and sharing in clouds, and mobile cloud computing services. This seminar gives an overview of recent developments in security for next-generation wireless networks, especially in LTE/LTE-A and 5G mobile wireless networks. It also discusses trends and future research directions in this area.
Speaker: Prof. Dr. Roland Petrasch
Department of Computer Science
Faculty of Science and Technology
Thammasat University Rangsit Campus,
Patumthani, 12121 THAILAND
Title: Industry 4.0 and Smart Manufacturing - What are the New Technological Concepts?
Abstract: The term Industry 4.0 has been used frequently with respect to German industry since 2011. It is often described as the new (fourth) industrial revolution, enabling suppliers and manufacturers to leverage new technological concepts like CPS (Cyber-Physical Systems), the Internet of Things, Big Data and Cloud Computing (CC): new or enhanced products and services can be created, costs can be reduced and productivity increased. Similar terms are Smart Factory and Smart Manufacturing. The ideas, concepts and technologies are not hype anymore - they are at least partly reality, but there is still a lot to do, e.g. standardization. What are these new (and old) technologies like IIoT (Industrial Internet of Things), Internet of Services, Cloud Computing, Big Data and CPS (Cyber-Physical Systems) behind Industry 4.0 and Smart Manufacturing? How do the components and technologies work together? What are new or better applications in the context of Industry 4.0? This talk provides an overview and gives some answers to these questions.
Speaker:Dr. S. Dhanjal, P. Eng.
Dept of Computing Science
Thompson Rivers University
KAMLOOPS, BC V2C 0C8, CANADA
Title: Digital Speech Processing of Two Languages: English and Punjabi
Abstract:Digital Speech Processing has many practical applications, including speech analysis/synthesis, speaker verification/identification and language identification. It is a research area that involves Computing Science, Electrical Engineering, Mathematics, Statistics, Linguistics, and Phonetics. Human speech is very complicated and no computer model can account for all the characteristics of speech production. However, the linear prediction analysis/synthesis model has gained popularity in digital speech processing because the mathematical theory is well-known, and the quality of speech synthesized by this model is almost indistinguishable from the original speech. With more than 140 million speakers in 150 countries, the Punjabi language is amongst the top 15 spoken languages. Although English and Punjabi have totally different phonetics, they have been investigated using linear prediction analysis. This talk will outline the problems encountered during the linear prediction analysis/synthesis of these two languages. It will be of interest to research scholars in many fields: Computing Science & Engineering, Information Technology, Linguistics, Literature, Mathematics, Computerized Speech Analysis & Synthesis, Natural Language Processing, and applications of Linear Algebra.
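As context for the linear prediction analysis/synthesis model mentioned above, the sketch below shows the textbook autocorrelation method with the Levinson-Durbin recursion and the matching all-pole synthesis filter. The frame length and model order in the usage note are illustrative assumptions, not the speaker's settings.

```python
import numpy as np

def lpc(frame, order):
    """All-pole coefficients a (a[0] = 1) and residual energy via the
    autocorrelation method and the Levinson-Durbin recursion."""
    n = len(frame)
    r = np.array([frame[:n - k] @ frame[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coefficient
        a[1:i] += k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k                           # prediction-error update
    return a, err

def synthesize(excitation, a, gain):
    """Drive the all-pole filter 1/A(z) with an excitation signal
    (pulse train for voiced frames, noise for unvoiced frames)."""
    out = np.zeros(len(excitation))
    p = len(a) - 1
    for t in range(len(excitation)):
        past = out[max(t - p, 0):t][::-1]            # out[t-1], out[t-2], ...
        out[t] = gain * excitation[t] - a[1:len(past) + 1] @ past
    return out

# Typical use on one windowed 30 ms frame with a 10th-order model (assumed values):
# a, err = lpc(frame * np.hamming(len(frame)), order=10)
```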
Speaker:Prof. Dr. Sven-Hendrik Voss
Beuth Hochschule für Technik Berlin
University of Applied Sciences
Luxemburger Straße 10
13353 Berlin
Title: Towards Unique Performance using FPGAs in Modern Communication, Data Processing and Sensor Systems
Abstract: Modern innovative applications like machine-to-machine (M2M) communication, multi-gigabit data networks, extensive sensor networks, and data acquisition and big data analytics require an enormous amount of processing power and bandwidth. The traditional approach of deploying a processing and transmission infrastructure by cascading multicore CPUs, using offload engines and GPU cores, is usually expensive and not always practical, posing an obstacle to creative and innovative applications. This talk gives an overview of innovative approaches in digital hardware design, far away from CPU load dependencies and multithread workarounds, with decisive hints towards fully integrated hardware solutions, thus opening doors to higher bandwidth, processing capability, reliability and resolution, as well as the lowest possible latency. The use of Field Programmable Gate Arrays (FPGAs) in combination with a sophisticated design methodology has proven to overcome many of the usual obstacles related to complex applications and to enable highly efficient implementations. Intelligent circuit design helps in decreasing implementation size and power consumption. The described approaches will be illustrated by specific design examples of challenging applications. In addition, an overview of future research within this field is presented.
Speaker:Yukio Ohsawa, PhD, Professor
Department of Systems Innovation
School of Engineering, The University of Tokyo
113-8656 Tokyo, Japan
Title: Discovery without Learning - A Lesson from Innovators Marketplace on Data Jackets
Abstract: In the workshop called Innovators Marketplace on Data Jackets (IMDJ), as presented at SPIN2015, participants exchange abstracts of their data, requirements for data, and knowledge about data science, so that they discover ways to use/reuse/collect data. A lesson we learned from IMDJ recently is that users of data need methods for Discovery without Learning, because they seek clues for decision making in data without significant patterns or coherent causalities. In this talk I show simple algorithms, including Tangled String, applied to time series of earthquakes and of human behaviors in markets. The results show that the Discovery without Learning approach externalizes useful clues for decision making.
Speaker:Professor C. Sidney Burrus
Title:FIR Filter Design using Lp Approximation
Abstract: This paper applies the iterative reweighted least squares (IRLS) algorithm to the design of optimal Lp approximation filters. The algorithm combines a variable-p technique with Newton's method to give excellent, robust initial convergence and quadratic final convergence. Details of the convergence properties when applied to the Lp optimization problem are given. The primary purpose of Lp approximation for filter design is to allow design with different error criteria in the pass and stop bands and to design constrained Lp approximation filters. The new method can also be applied to the complex Chebyshev approximation problem and to the design of two-dimensional FIR filters. Initial work on the application to IIR filters has also been done.
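To make the IRLS idea concrete, here is a minimal sketch of Lp FIR design by iterated weighted least squares on a frequency grid (odd-length, linear-phase). The damped update stands in for the paper's variable-p and Newton refinements, which are not reproduced; the grid, band edge and p in the usage note are assumed values.

```python
import numpy as np

def lp_fir_design(numtaps, omega, desired, p=10.0, iters=50, eps=1e-8):
    """Odd-length linear-phase FIR design minimizing the Lp error on a
    frequency grid, via iterative reweighted least squares (IRLS)."""
    M = (numtaps - 1) // 2
    A = np.cos(np.outer(omega, np.arange(M + 1)))    # amplitude-response basis
    A[:, 1:] *= 2.0                                  # symmetric taps count twice
    x = np.linalg.lstsq(A, desired, rcond=None)[0]   # L2 solution as a start
    for _ in range(iters):
        e = A @ x - desired
        w = np.abs(e) ** ((p - 2.0) / 2.0) + eps     # Lp-to-weighted-L2 weights
        x_new = np.linalg.lstsq(w[:, None] * A, w * desired, rcond=None)[0]
        x = 0.5 * (x + x_new)                        # damped (partial) update
    return np.concatenate([x[:0:-1], x])             # unfold the symmetric taps

# omega = np.linspace(0, np.pi, 512)
# h = lp_fir_design(31, omega, (omega <= 0.4 * np.pi).astype(float))
```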
Speaker:Ivan Linscott
PI Radioscience Experiment
Electrical Engineering Department
Stanford University, USA
Title:First Results from The New Horizons Encounter at Pluto
Abstract: The instruments on board the New Horizons spacecraft measured key characteristics of Pluto and Charon during the July 14, 2015, flyby. The data collected is being transmitted to Earth over the following 16 months. To date, high resolution images have been obtained, along with spatially resolved spectroscopy in the infrared and ultraviolet, revealing a world of extraordinary character. Additionally, during the flyby the Radio Science Experiment (REX), in the NH X-band radio transceiver, recorded powerful uplink transmissions from Earth stations, as well as broadband radiometric power from the surfaces of Pluto and Charon. The REX recording of the uplinks produced a precise measurement of the surface pressure, the temperature structure of the lower atmosphere, and the surface radius of Pluto. In addition, REX measured thermal emission from Pluto to a precision of 0.1 K at 4.2-cm wavelength during two linear scans across the disk at close range, when both the dayside and the night side were visible. This work was supported by NASA's New Horizons project.
Speaker:Dr. Pan Agathoklis
Dept of ECE, University of Victoria
P.O. Box 1700, Victoria, B.C., V8W 2Y2, CANADA
Title:Fast Image and Video Editing in the Gradient Domain using a Wavelet-based Approach.
Abstract: There are many applications where a function has to be obtained by numerically integrating gradient data measurements. In signal and image processing, such applications include digital photography where the camera senses changes in intensity rather than intensity itself (as is the case in most cameras today), rendering high dynamic range images on conventional displays, and editing and creating special effects in images and video. A common approach to this multi-dimensional (mD) numerical integration problem is to formulate it as the solution of an mD Poisson equation and obtain the optimal least-squares solution using any of the available Poisson solvers. Another area of application is adaptive optics telescopes, where wave front sensors provide the gradient of the wave front and it is required to estimate the wavefront by essentially integrating the gradient data in real time. Several fast methods have been developed to accomplish this, such as multigrid, conjugate gradient and Fourier transform techniques similar to those used in machine vision. A new 2-D and 3-D reconstruction method based on wavelets has been developed and applied to image reconstruction for adaptive optics and to image and video editing. This method is based on obtaining a Haar wavelet decomposition of the image directly from the gradient data and then using the well-known Haar synthesis algorithm to reconstruct the image. The technique further allows the use of an iterative Poisson solver at each iteration to enhance the visual quality of the resulting image and/or video. This talk focuses on image reconstruction techniques from gradient data and discusses the various applications where these techniques can be applied, ranging from advanced optical telescopes to image and video editing, shape from shading, etc.
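For comparison with the wavelet approach described above, one of the standard "available Poisson solvers" the abstract mentions can be sketched in a few lines: a spectral solve of the discrete Poisson equation from gradient fields. Periodic boundaries are assumed for simplicity; this is not the speaker's Haar-based method.

```python
import numpy as np

def poisson_integrate(gx, gy):
    """Recover f (up to an additive constant) from gradient estimates gx, gy
    by solving div(grad f) = div(g) with an FFT Poisson solver."""
    H, W = gx.shape
    # Divergence of the gradient field via backward differences
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    fx = np.fft.fftfreq(W)
    fy = np.fft.fftfreq(H)
    # Eigenvalues of the discrete Laplacian [1, -2, 1] per dimension
    denom = (2 * np.cos(2 * np.pi * fx)[None, :] - 2) + \
            (2 * np.cos(2 * np.pi * fy)[:, None] - 2)
    denom[0, 0] = 1.0                      # avoid divide-by-zero at DC
    F = np.fft.fft2(div) / denom
    F[0, 0] = 0.0                          # the mean is unrecoverable; fix to 0
    return np.real(np.fft.ifft2(F))
```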
Speaker:Prof. Takeshi Onodera
Research and Development Center for Taste and Odor Sensing, Kyushu University
Fukuoka-shi, 819-0395, Japan
Title:Highly Sensitive Detection of Explosives Using Surface Plasmon Resonance Biosensor
Abstract: The presentation will focus on recent developments in an "electronic dog nose" based on a portable surface plasmon resonance (SPR) sensor system and monoclonal antibodies for trace detection of explosives. We developed suitable sensor surfaces for the SPR sensor for on-site detection. For the SPR sensor to detect trace amounts of explosives, the molecules of the explosives have to be dissolved in a buffer solution. Therefore, we have developed not only the appropriate sensor surfaces but also our own antibodies, a collection procedure for trace explosives, and a protocol for on-site detection of explosives with the SPR sensor system. Sensor surfaces modified with self-assembled monolayers (SAMs) and portable SPR sensor systems were developed for on-site sensing. A limit of detection (LOD) of 5.7 pg/mL (ppt) for the explosive 2,4,6-trinitrotoluene (TNT) was achieved using a combination of an indirect competitive assay (inhibition assay) format and a polymer-brush-modified sensor surface. To realize fast TNT detection, we also adopted a displacement method on the SPR system. In the displacement method, the antibody solution and the TNT solution do not require premixing before measurement and can be injected sequentially. Detection can be judged from the slope of the sensorgram within 10 s after the injection of the TNT solution. The LOD of TNT with the displacement assay format and a one-minute flow of TNT solution was 0.9 ng/mL (ppb), when a SAM surface containing an ethylene glycol chain with DNP-glycine was used. Furthermore, a demonstration experiment of TNT detection within one minute was carried out successfully using the portable SPR sensor system with the displacement assay format and sample collection by wiping.
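The slope-based judgement in the displacement method can be illustrated with a small sketch. The window length, sampling rate, threshold and sign convention below are assumptions for illustration only, not the authors' calibrated protocol.

```python
import numpy as np

def displacement_detect(sensorgram, fs, window_s=10.0, threshold=-0.05):
    """Fit a line to the first `window_s` seconds of the sensorgram after
    injection and flag detection when the slope falls below a calibrated
    threshold (negative slope assumed for a displacement response)."""
    n = int(window_s * fs)
    t = np.arange(n) / fs
    slope, _ = np.polyfit(t, sensorgram[:n], 1)   # least-squares line fit
    return slope < threshold, slope
```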
Speaker:Jean-Pierre Leburton
Gregory Stillman Professor of Electrical and Computer Engineering,
Beckman Institute for Advanced Science & Technology.
University of Illinois at Urbana-Champaign, USA
Title:Genomics with Semiconductor Nanotechnology
Abstract: In recent years there has been tremendous interest in using solid-state membranes with nanopores as a new tool for DNA and RNA characterization and possibly sequencing. Among solid-state porous membranes, the single-atom thickness of monolayer graphene makes it an ideal candidate for DNA sequencing, as it can scan molecules passing through a nanopore at high resolution. Additionally, unlike most insulating membranes, graphene is electrically active, and this property can be exploited to control and electronically sense biomolecules. In this talk, I will present a scenario that integrates biology with a graphene-based field-effect transistor for probing the electrical activity of DNA molecules during their translocation through a graphene membrane nanopore, thereby providing a means to manipulate them and potentially identify their molecular sequences by electronic techniques. Specifically, I will show that the shape of the edge as well as the shape and position of the nanopore can strongly affect the electronic conductance through a lateral constriction in a graphene nanoribbon, as well as its sensitivity to external charges. In this context the geometry of the graphene membrane can be tuned to detect the rotational and positional conformation of a charge distribution inside the nanopore. Finally, I show that a quantum point contact (QPC) geometry is suitable for the electrically active graphene layer and propose a viable design for a graphene-based biomolecule detecting device.
Speaker:Patrick Gaydecki
Professor, Sensing, Imaging and Signal Processing Group
School of Electrical and Electronic Engineering
University of Manchester, Manchester M60 1QD, United Kingdom
Title:Real-time Digital Emulation of the Acoustic Cello using dCello
Abstract:We describe a device called dCello, which modifies the sound produced by an electric cello, producing an output signal which, when fed to an amplifier and loudspeaker, approximates closely the timbre of a high quality acoustic equivalent. Although the engineering details of the system are complex, the principles are straightforward. The signal produced by the pickup from the electric cello is first fed to a high-impedance preamplifier, converted into digital form and then processed by a digital signal processor operating at 550 million multiplication-additions per second (MMACs). The algorithm on the DSP device functions as the body of a wooden cello, which the electric cello lacks. It also operates so quickly that there is no perceptible delay between the bow striking the string and the corresponding sound generated by the amplifier. The unit incorporates a number of other functions to optimize the characteristics of the output to suit the acoustic properties of the ambient space or player preferences. These include a 20-band graphic equalizer, a versatile arbitrary equalizer, a volume control and an adjustable blender. The blender, which combines the original with the processed signal, extends the scope of the system for use with acoustic instruments fitted with pickups on the bridge. The unit is controlled by Windows-based software that allows the user to download new responses and to adjust the settings of the volume (gain), graphic equalizer and arbitrary equalizer. The device has already been used by a professional cellist during her performance at a music festival in the Netherlands, to considerable acclaim.
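Although dCello's internals are not described here, imposing a measured body response on a pickup signal is classically done by fast block convolution; the minimal overlap-add sketch below (with hypothetical block handling) illustrates how such low-latency processing can be organized. It is a generic sketch, not the dCello algorithm itself.

```python
import numpy as np

def overlap_add_stream(blocks, body_ir, block_len):
    """Convolve a stream of equal-length sample blocks with an impulse
    response (e.g. a measured cello-body response) via FFT overlap-add."""
    nfft = 1
    while nfft < block_len + len(body_ir) - 1:
        nfft *= 2                                  # FFT size for linear convolution
    H = np.fft.rfft(body_ir, nfft)
    tail = np.zeros(nfft - block_len)              # overlap carried between blocks
    for x in blocks:
        y = np.fft.irfft(np.fft.rfft(x, nfft) * H, nfft)
        y[:len(tail)] += tail                      # add the previous block's tail
        tail = y[block_len:].copy()                # save the new tail
        yield y[:block_len]                        # emit one processed block
```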
Speaker:Carlos M. Travieso-González
Vice-Dean,University of Las Palmas de Gran Canaria
Institute for Technological Development and Innovation in Communications (IDeTIC)
Signals and Communications Department, Campus Universitario de Tafira, s/n
Pabellón B - Despacho 111, 35017 - Las Palmas de Gran Canaria, SPAIN
Abstract: Research on neurodegenerative diseases has increased during recent years, and new techniques and methods are being proposed. It builds on the relationship between humanity and emotion, which cannot be separated and is innate to humans. The study of emotions has therefore been of great interest. Researchers are trying to analyse why and how emotions occur, relating events or reactions, both physical and internal to the human body, in order to answer these questions and be able to distinguish these emotions. A way of detecting them is shown in this keynote. In particular, automatic detection of the level of excitement or arousal is proposed through the labial movement of a person. This is an innovative, non-invasive system, which can help in the preliminary diagnosis and prolonged follow-up of a patient with various psychological or neurodegenerative disorders.
Speaker:Professor Juan Luis Castro
Department of Computer Science and Artificial Intelligence
University of Granada, Spain
Title: From Tags Cloud to Concepts Cloud
Abstract: The spread of Web 2.0 has caused an explosion of user-generated content. Users can tag resources in order to describe and organize them. A tag cloud provides a rough impression of the relative importance of each tag within the overall cloud, in order to facilitate browsing among numerous tags and resources. The main failing of these systems is that alternative tags can be used for the same concept, which can distort the tag cloud. In this lecture we analyse tag recommender systems and tag cloud representation, focusing on systems able to create conceptually extended folksonomies. In these folksonomies each concept is represented as a set of multi-terms (alternative tags for the same concept), and the tag cloud is represented by using a canonical concept label for every concept. We will present TRCloud, a tag recommender system able to create a conceptually extended folksonomy from scratch. It uses a hybrid approach to detect an initial set of candidate tags from the content of each resource, by means of syntactic, semantic, and frequency features of the terms. Additionally, the system adapts the weights of the remaining candidates when a user selects a tag, as a function of the syntactic and semantic relations existing among tags.
Speaker:David M. Nicol
Director, Information Trust Institute
Franklin W. Woeltge Prof. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
Urbana, Illinois, United States
Title:Modeling Trust in Integrated Networks
Abstract: Trust in a complex system is the expectation that the system will behave as intended, even in contexts and scenarios that were unforeseen. The development of trust models, and of means of evaluating them, is a key problem in the design of integrated networks, which embody hierarchy, composition, and separation of function. Different layers have different trust attributions (e.g., one may focus on provisioning of connectivity, another on provisioning of bandwidth). The challenge is to develop means of reasoning about the overall end-to-end trust in the system, perhaps by composing trust models that have been developed for different layers. This talk identifies the issues and suggests an approach in the context of network access control.
Speaker:Prof. Károly Farkas
Department of Networked Systems and Services,
Budapest University of Technology & Economics,
Hungary, European Union
Title:Smart City Services Exploiting the Power of the Crowd
Abstract: Collecting data and monitoring our environment form the basis for the smart city applications that are becoming popular today. However, the traditional approach of deploying a sensing and monitoring infrastructure is usually expensive and not always practical, posing an obstacle to creative and innovative application development. Mobile crowdsensing can open new ways for data collection and smart city services. In this case, mobile devices with their built-in sensors, together with their owners, are used to monitor the environment and collect the necessary data, usually in real time and at minimal cost. Thus, the power of the crowd can be exploited as an alternative to infrastructure-based solutions for developing innovative smart city services.
In this talk, we give a short overview of the European COST ENERGIC Action (IC1203), focusing on the potential of mobile crowdsensing in smart city services; the use of crowdsourced geographic data in government; and the requirements for a generic crowdsensing framework for smart city applications. Moreover, we present some case studies and sample scenarios in this field, such as the smart timetable service of a travel planner, which can be updated in real time based on the time gaps between consecutive buses continuously monitored by passengers on public transportation routes.
Speaker:Professor Björn Þór Jónsson
School of Computer Science,
Reykjavík University, Iceland
Title:Are We There Yet? – Towards Scalability of High-Dimensional Indexing
Abstract: Due to the proliferation of tools and techniques for creating, copying and sharing digital multimedia content, large-scale retrieval by content is becoming more and more important, for example for copyright protection. Recently proposed multimedia description techniques typically describe the media through many local descriptors, which both increases the size of the descriptor collection and requires many nearest neighbour queries. Needless to say, scalability of query processing is a significant concern in this new environment. To tackle scalability, two basic categories of approaches have been studied. The typical "computer-vision-based" approach is to compress the descriptors to fit them into memory, while the typical "database-based" approach tackles scale by dealing gracefully with disk accesses. In order to cope with the Web-scale applications of the future, we argue a) that disk accesses cannot be ignored, and b) that scale can no longer be ignored in research on multimedia applications and high-dimensional indexing. This talk will give an overview of some major scalability results in the literature, with a strong focus on the database-based methods.
Speaker: Prof. Dong Hwa Kim
Dept. of Electronic and Control Eng.,
Hanbat National University, South Korea
Title:Smart City and ICT in Korea
Abstract: This lecture presents e-governance and a new paradigm for the knowledge-based society using ICT, taking Seoul as an example of e-governance, including the smart grid and smart city of Korea. With the spread of ICT, many countries have been investing in e-governance, and the world's major cities have embarked on smart city initiatives as a form of e-government: for instance, Seoul, New York, Tokyo, Shanghai, Singapore, Amsterdam, Cairo, Dubai, Kochi, Malaga, and so on. Korea has strong competitiveness in ICT, ranking 1st among 159 countries in the ICT Development Index (ITU, 2011) and 1st among 192 countries in the E-Government Readiness Index (UN, 2010). Korea is also at the global top level in ICT infrastructure and service penetration. Recently, using these infrastructures, Korea has been preparing a knowledge-based new paradigm: this ICT enabled Seoul's implementation of its "Smart Seoul 2015" project, providing a best-practice guide to the construction and operation of a smart city, smart grid, and energy. Seoul in particular has the best conditions for a smart city (e-governance): ICT infrastructure developed in anticipation of future service demands; a well-defined "integrated city-management framework"; and increasing access to smart devices and education on their use, across income levels and age groups. The lecture will also present Korea's R&D programs and cooperation opportunities. The conclusion suggests many possible approaches, why it is important to cooperate at this point, and how we can arrive at good ideas for cooperation.
Speaker:Professor Kiyoshi Toko
Distinguished Professor, Director,
Graduate School of Information Science and Electrical Engineering.
Kyushu University, Fukuoka, Japan
Title: Biochemical Sensors for Gustatory and Olfactory Senses
Abstract: Physical sensors have been developed and used around the world since early days, but chemical sensors playing the role of the gustatory and olfactory senses had long been lacking. Recently, such sensors have made rapid progress, and are named electronic tongues and electronic noses, respectively. A taste sensor, which is a kind of electronic tongue, utilizes lipid/polymer membranes as the receptor part for taste substances. This sensor has the property of global selectivity, i.e., the potential to decompose taste into the five basic taste qualities (sweetness, bitterness, sourness, saltiness, umami) and quantify them. The taste sensor system is composed of at least five different sensor electrodes, each of which responds in a similar way to several kinds of chemical substances with the same taste, but shows no response to substances with other taste qualities. The taste sensor is now sold worldwide and used in food and pharmaceutical companies. On the other hand, there are many types of electronic noses according to materials and measurement principles, such as oxide semiconductors, quartz crystal microbalance (QCM), surface plasmon resonance (SPR), and conductive polymers. An electronic nose with SPR and antigen-antibody interaction can detect explosives such as trinitrotoluene (TNT) at the ppt level, which is superior to dog noses. This electronic dog nose is just coming into real use.
Speaker:Masahito Togami, Ph.D.
Senior Researcher, Unit Leader, Intelligent Media Systems
Research Department, Central Research Laboratory
Hitachi Ltd., Japan
Title:Time-Varying Multichannel Gaussian Distribution Model for Speech Dereverberation and Separation
Abstract: In this talk, I will introduce a recently proposed time-varying multichannel Gaussian distribution model for speech signal processing, which reflects the time-varying characteristics of speech sources. The time-varying multichannel Gaussian distribution model can be integrated naturally with several Gaussian-based methods, e.g. Kalman filtering and multichannel Wiener filtering. In the time-varying multichannel Gaussian distribution model, it is easy to add and remove Gaussian distribution models for specific purposes. Additionally, optimization of the parameters is performed efficiently using the EM algorithm. In addition to introducing the time-varying multichannel Gaussian distribution model, I will present its applications to noise reduction, dereverberation, and echo reduction.
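As background, one widely used form of such a model is the local Gaussian model, in which a time-varying source power scales a time-invariant spatial covariance at each time-frequency point. This form is assumed here for illustration; the speaker's exact formulation may differ.

```latex
% Local Gaussian model (assumed illustrative form): time-varying power
% v_{t,f} times a time-invariant spatial covariance R_f at point (t, f).
\mathbf{x}_{t,f} \sim \mathcal{N}_{\mathbb{C}}\!\left(\mathbf{0},\; v_{t,f}\,\mathbf{R}_f\right),
\qquad
\log p(\mathbf{x}) = -\sum_{t,f}\left[
  \frac{1}{v_{t,f}}\,\mathbf{x}_{t,f}^{\mathsf{H}}\,\mathbf{R}_f^{-1}\,\mathbf{x}_{t,f}
  + \log\det\!\left(\pi\, v_{t,f}\,\mathbf{R}_f\right)\right]
```

Under this form, EM alternates closed-form updates of v_{t,f} and R_f, which is one way the parameter optimization mentioned above can proceed.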
Speaker: Mort Naraghi-Pour, Ph.D.
Michael B. Voorhies Distinguished Associate Professor
School of Electrical Engineering and Computer Science
Louisiana State University, Baton Rouge, LA, USA
Title:Hypothesis Testing in Wireless Sensor Networks in the Presence of Misbehaving Nodes
Abstract: Wireless sensor networks
(WSNs) are used in many military and civilian applications including
intrusion detection and surveillance, medical monitoring, emergency
response, environmental monitoring, target detection and tracking, and
battlefield assessment. In mission critical applications of WSNs, the
security of the network operation is of utmost importance. However,
traditional network security mechanisms are not adequate for distributed
sensing networks. This is due to the fact that these networks cannot be
physically secured, making the sensor nodes vulnerable to tampering. For
example, an adversary may tamper with legitimate sensors or deploy its
own sensors in order to transmit false data so as to confuse a central
processor. False data may also be due to sensor node failures. In large
WSNs with hundreds or thousands of nodes, many nodes may fail due to
hardware degradation or environmental effects.
In this talk we consider an important application,
namely the problem of detection using WSNs in the presence of one or
more classes of misbehaving nodes. Binary hypothesis testing is
considered along with decentralized and centralized detection. In the
former case the sensors make a local decision and transmit that decision
to a fusion center. In this case we identify each class of nodes with
an operating point (false alarm and detection probabilities) on the ROC
(receiver operating characteristic) curve. In the latter case the sensor
nodes transmit their raw data to the fusion center. In this case the
nodes are identified by the probability density function (PDF) of their
observations. To classify the nodes and detect the underlying
hypothesis, maximum likelihood estimation of the operating point or the
PDF of the sensors' observations is formulated and solved using the
Expectation Maximization (EM) algorithm with the nodes' identities as
latent variables. It is shown that the proposed method significantly
outperforms previous techniques such as the reputation based methods.
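To make the latent-variable formulation concrete, below is a simplified EM-style sketch for the decentralized case, assuming conditionally independent binary decisions and alternating posteriors over the latent hypothesis and the latent node classes. It is an illustrative sketch; the talk's exact derivation may differ.

```python
import numpy as np

def em_misbehaving(u, K=2, iters=50, seed=0):
    """u: (N, T) binary local decisions from N nodes over T snapshots.
    Estimates K per-class ROC operating points (p_fa, p_d), soft node-class
    memberships q, and per-snapshot posteriors g1 of hypothesis H1."""
    rng = np.random.default_rng(seed)
    N, T = u.shape
    pfa = rng.uniform(0.05, 0.45, K)          # class false-alarm probabilities
    pd = rng.uniform(0.55, 0.95, K)           # class detection probabilities
    pi1 = 0.5                                 # prior Pr(H1)
    q = np.full((N, K), 1.0 / K)              # node-class responsibilities
    for _ in range(iters):
        p0 = q @ pfa                          # Pr(u_i = 1 | H0), classes averaged
        p1 = q @ pd                           # Pr(u_i = 1 | H1)
        log0 = (u * np.log(p0[:, None]) + (1 - u) * np.log(1 - p0[:, None])).sum(0)
        log1 = (u * np.log(p1[:, None]) + (1 - u) * np.log(1 - p1[:, None])).sum(0)
        g1 = 1.0 / (1.0 + np.exp(log0 + np.log(1 - pi1) - log1 - np.log(pi1)))
        ll = np.empty((N, K))                 # class log-likelihood per node
        for k in range(K):
            ll[:, k] = (u * np.log(pfa[k]) + (1 - u) * np.log(1 - pfa[k])) @ (1 - g1) \
                     + (u * np.log(pd[k]) + (1 - u) * np.log(1 - pd[k])) @ g1
        q = np.exp(ll - ll.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
        for k in range(K):                    # M-step: refit operating points
            w = q[:, k][:, None]
            pfa[k] = np.clip((w * u * (1 - g1)).sum() / (w * (1 - g1)).sum(), 1e-3, 1 - 1e-3)
            pd[k] = np.clip((w * u * g1).sum() / (w * g1).sum(), 1e-3, 1 - 1e-3)
        pi1 = g1.mean()
    return pfa, pd, q, g1
```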
Speaker:Patrick Gaydecki
Professor, Sensing, Imaging and Signal Processing Group
School of Electrical and Electronic Engineering
University of Manchester, Manchester M60 1QD, United Kingdom
Title:A commentary on Theories of Time and their Implications for Digital Signal Processing
Abstract: The concept of absolute time was introduced by Newton in his work "Philosophiæ Naturalis Principia Mathematica", in which it was stated that time existed independent of any reference and any observer, flowing uniformly without regard to external influences or factors. This provided a theoretical foundation for the establishment of Newtonian mechanics and continues to be applied, successfully, in our quantitative treatment of physical processes. Since then there have been several revolutions in our understanding of time, all of which, to a lesser or greater degree, reveal that time cannot, on the macroscopic (Einsteinian) or microscopic (quantum) scale, be considered absolute and uniform, but instead is inextricably linked to a particular frame of reference and the fine-grain structure of the universe. This paper seeks to explore key concepts in our understanding (and misunderstanding) of time, and how the measurement of time is central to digital signal processing, itself predicated on regular, periodically sampled information.
Speaker:Professor Philip Hall,
Distinguished Lecturer, IEEE Society on Social Implications of Technology (SSIT),
Department of Electrical & Electronic Engineering,
The University of Melbourne, Australia
Title:Climate Divergence – Fact or Fiction? Synoptic Characterisation as a Methodology for Short-to-Medium Term Climate Analysis and Forecasting
Abstract: It is widely accepted that global climate change is
having an increasingly dramatic impact on water, energy and food
security. Establishing a connection between regional climate variability
and rainfall delivery variability associated with extreme events will
enable us to gain an improved understanding of the potential impacts of
climate change on essential human activities – such as broadacre farming – via the rainfall delivery mechanism. Therefore, being able to
understand these events and their transitional behaviours is of prime
importance.
Characterisation methodologies have, to date, not been
widely used to study meteorological phenomena. Where they have been
successfully applied for this purpose, climate data more than synoptic
data has been used and the primary focus has been on analysing the
medium-to-long term trends rather than trends in short-to-medium term
climate variability. However, historical synoptic data shows that recent
climate variability displays greater divergence from the long term
trend, suggesting that short-to-medium term climate variability can be
analysed using the synoptic characteristics of the delivery mechanism
rather than the occurrence of extreme events. This characteristic of
climate change, being the trend of short-to-medium term variability of
atmospheric parameters from the long term trends, is defined by the
author as climate divergence. Importantly, therefore, if we are to
understand the variation in rainfall delivery and water availability
associated with climate change and its potential impact on natural
resources and reliant human activities – such as soil and agriculture –
then we must consider the climate divergence from long term trends (both
historical and future forecasts), rather than the long term trends
themselves.
Synoptic characterisation (in the meteorological
context) is defined as a technique that uses synoptic data to identify
and study the distinctive traits and essential dynamic features, such as
behavioral characteristics and trends, of atmospheric variables
associated with meteorological phenomena. This paper seeks to
demonstrate that synoptic characterisation (meteorological) can be used
to assist us in establishing a connection between climate divergence and
deviations in rainfall patterns, and thus can be adapted as an effective
short-to-medium term climate analytical and forecasting tool.
The development of such a tool, together with better
monitoring technologies and data collection options, will provide a
framework for better decision making and risk management. Information
gained through the synoptic characterisation of regional climate, in
conjunction with other data gathering activities, can enhance the basis
for studies that provide a large portion of the data required for
evaluating and validating numerical regional and global scale climate
models. Information from these studies indirectly assists in the
evaluation of the impacts due to potential future climate changes on the
regional hydrologic system.
Speaker:Radim Burget
Group Leader – Data Mining Group, Signal Processing Laboratory.
Department of Telecommunications, Brno University of Technology.
Brno, Czech Republic, European Union.
Title:Process Optimization and Artificial Intelligence: Trends and Challenges
Abstract: Business process optimization has become increasingly attractive in the wider area of business process intelligence. Although much research has been done in this area, its transfer from research laboratories into a business environment very often fails, and there are plenty of obstacles that prevent its deployment in industry. This presentation will provide an overview of the technologies used in one such system that has made the successful journey from research lab to business deployment. Furthermore, it will discuss complementary technologies related to artificial intelligence that help in controlling complex processes.
Speaker:Chris Rizos
President, International Association of Geodesy (IAG)
Professor, Geodesy & Navigation
Surveying & Geospatial Engineering
School of Civil & Environmental Engineering
The University of New South Wales,
Sydney, AUSTRALIA
Title:Precise GNSS Positioning – the Role of National and Global Infrastructure and Services.
Abstract: Precise positioning – defined broadly as positioning accuracy higher than about one metre – is
something that GPS was never intended to deliver. However, starting in
the 1980s, a series of innovations ensured that centimetre-level
accuracy could be achieved. The primary innovation was the development
of the differential or relative GPS positioning mode, whereby
positioning of a receiver, in real-time and even if moving, was done
using GPS data from a static reference station. DGPS was refined over
the 1980s and 1990s to become an extremely versatile precise positioning
and navigation tool. It has revolutionised geodesy, surveying, mapping
and precise navigation. Furthermore, since the 1990s many governments,
academic institutions and private companies have established
"continuously operating reference stations" (or CORS) as fundamental
national positioning infrastructure. In 1994 the International GPS
Service (IGS) was launched, characterised by a globally distributed GPS
CORS network (now numbering over 400 stations) whose data was used to
compute precise satellite orbit and clock information. Such a service
continues to provide vital information to support geoscience, national
geodetic programs, and precise positioning in general.
We are witnessing the launch of a surge of new
navigation satellite systems, with a commensurate increase in satellites
and signals, new receiver techniques and an expansion in precise
positioning applications. This heralds the transition from a
GPS-dominated era – that has served the community for almost 30 years –
to a multi-constellation Global Navigation Satellite System (GNSS)
world. These new GNSSs include the modernized U.S. controlled GPS and
the Russian Federation's GLONASS constellations, China's new BeiDou system, the E.U. Galileo, as well as India's Regional Navigation Satellite System (IRNSS), and Japan's Quasi-Zenith Satellite System
(QZSS). Next generation CORS infrastructure is being deployed, and new
precise positioning products are being generated. In addition, new
positioning techniques not based on DGPS principles are being developed.
One that shows considerable promise is the Precise Point Positioning
(PPP) technique. Furthermore, precise positioning is becoming mainstream
and it is predicted that a massive new class of users will embrace the
precise GNSS positioning technology. This paper will explore
developments in precise GNSS positioning technology, techniques,
infrastructure, services and applications.
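The differential principle described above can be sketched in a few lines: a reference receiver at a surveyed position turns its pseudorange residuals into per-satellite corrections, which a nearby rover applies before an ordinary least-squares fix. Data shapes and values below are placeholders, not a real receiver interface.

```python
import numpy as np

def pr_corrections(sat_pos, ref_pr, ref_pos):
    """Per-satellite corrections from a reference station at a known
    position: true geometric range minus measured pseudorange."""
    return np.linalg.norm(sat_pos - ref_pos, axis=1) - ref_pr

def ls_fix(sat_pos, pr, x0):
    """Iterative least-squares fix for position and receiver clock bias."""
    x = np.append(np.asarray(x0, float), 0.0)          # [x, y, z, c*dt]
    for _ in range(10):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)  # predicted ranges
        H = np.hstack([(x[:3] - sat_pos) / rho[:, None],
                       np.ones((len(pr), 1))])         # geometry matrix
        x += np.linalg.lstsq(H, pr - (rho + x[3]), rcond=None)[0]
    return x

# Rover fix with common-mode errors removed (placeholder variable names):
# rover_fix = ls_fix(sat_pos, rover_pr + pr_corrections(sat_pos, ref_pr, ref_pos), x0)
```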
Speaker:Prof. Dr. C. P. Schnorr
Johann Wolfgang Goethe Universität
Fachbereich Mathematik
AG Mathematische Informatik 7.2
Frankfurt, Germany
Title:Towards Factoring Integers by CVP Algorithms for the Prime Number Lattice in Polynomial Time
Abstract: We report on progress in factoring large integers N by CVP algorithms for the prime number lattice L. For factoring the integer N we generate vectors of the lattice L that are very close to the target vector N that represents N. Such close vectors yield relations mod N given by p_n-smooth integers u, v, |u - vN| that factor over the first n primes p_1, ..., p_n. We can factor N given about n such independent relations u, v, |u - vN|. Recent improvements:
- We perform the stages of enumerating lattice vectors close to N according to their success rate in providing a relation u, v, |u - vN|. The success rate is based on the Gaussian volume heuristic, which estimates the number of lattice points in a sphere of dimension n with a given radius and a random center.
- In each round we randomly fine each prime p_i, for i = 1, ..., n, with probability 1/2 by doubling the p_i coordinates of the vectors in L. By these random fines we generate independent relations mod N.
- We extremely prune the enumeration of lattice vectors close to N, efficiently generating only a very small fraction of the close vectors while still providing n relations mod N.
- The original method creates p_n-smooth u, v, |u - vN|. We must extend the method to non-smooth v, because for large N there are not enough relations with smooth v. The smoothness of v does not help to factor N; it merely results from the CVP algorithm for L.
Right now we create one relation mod N for N ≈ 10^14 and n = 90 primes in 6 seconds per relation. For much larger N there are not enough relations with p_n-smooth v, but there exist enough relations for arbitrary v. A main problem is to extend the method for directing and pruning the search for successful v from smooth to arbitrary v. For N ≈ 2^800 and n = 900 primes there are about 2.5 × 10^11 relations mod N corresponding to lattice vectors close to some target vector N_v that represents vN, enough for the efficient generation of 900 relations mod N and for achieving a new record factorization. So far we have implemented the algorithm only for p_n-smooth v; now we extend it to arbitrary v. Importantly, the prime basis for the CVP method is much smaller than for any other known factoring method.
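The claim that about n independent smooth relations suffice rests on the classical combination step shared by Dixon's method and the quadratic sieve: pick a subset of relations whose prime-exponent vectors sum to an even vector, multiply them into a congruence X^2 ≡ Y^2 (mod N), and read off a factor from gcd(X - Y, N). The sketch below illustrates only this final step, with relations assumed in the Dixon form x^2 ≡ Π p_i^e_i (mod N); it is not Schnorr's lattice-based relation search.

```python
from math import gcd
import numpy as np

def gf2_dependency(M):
    """Return a 0/1 row-combination vector summing to zero mod 2, or None."""
    M = (M % 2).astype(np.int64)
    rows, cols = M.shape
    combo = np.eye(rows, dtype=np.int64)   # tracks which rows were combined
    pivot = 0
    for c in range(cols):
        hit = next((r for r in range(pivot, rows) if M[r, c]), None)
        if hit is None:
            continue
        M[[pivot, hit]], combo[[pivot, hit]] = M[[hit, pivot]], combo[[hit, pivot]]
        for r in range(rows):
            if r != pivot and M[r, c]:
                M[r] ^= M[pivot]           # eliminate column c in row r
                combo[r] ^= combo[pivot]
        pivot += 1
    return combo[pivot] % 2 if pivot < rows else None

def factor_from_relations(N, xs, exps, primes):
    """xs[i]^2 = prod(primes[j] ** exps[i][j]) (mod N) for each relation i."""
    sel = gf2_dependency(np.array(exps))
    if sel is None:
        return None
    X, e = 1, [0] * len(primes)
    for i, take in enumerate(sel):
        if take:
            X = X * xs[i] % N
            e = [a + b for a, b in zip(e, exps[i])]
    Y = 1
    for p, h in zip(primes, e):
        Y = Y * pow(p, h // 2, N) % N      # exponents are even by construction
    g = gcd(X - Y, N)
    return g if 1 < g < N else None        # may fail; try another dependency
```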
Speaker:Professor Stephen Pistorius,
Director of Medical Physics Graduate Program
Vice Director: Bio-Medical Engineering Graduate Program
Cancer-Care Manitoba, Canada
Title:Signal Processing and Analysis of Microwave Signals for Breast Cancer Imaging and Detection
Abstract: Annually, approximately 1.3
million women worldwide will be diagnosed with breast cancer and about
465,000 will succumb to it, particularly in regions where access to
screening is limited. Early detection and effective treatment are major
factors contributing to long-term survival. X-Ray mammography is the
current standard for the detection of breast cancer. While x-ray
mammography has led to a decrease in mortality rates, high capital and
human resource requirements as well as significant false positive and
negative rates offer room for improvement.
Microwaves have been used to retrieve quantitative and
qualitative images of objects-of-interest (OI) for many years. Since the late 1970s, Microwave Imaging (MWI) has been investigated for
biomedical applications including systems for imaging animal
extremities, chemotherapy monitoring, calcaneus and heel imaging, and
breast cancer detection and imaging. This technology is based on the
differences between the dielectric properties of healthy and malignant
breast tissues in the microwave frequency range. MWI may prove to be
less harmful and stressful for the patient, since it does not require
breast compression, the signals are not ionizing and have a power of
less than 10 dBm. There are various options in the design of biomedical
MWI systems, as well as the associated options in the mathematical
formulation of the corresponding scattering problem. These options
impact the imaging performance of the system, e.g. different algorithms
and regularization techniques have been implemented to treat the
inherent nonlinearity and ill-posedness of such problems as well as
different experimental techniques used to collect data for the
algorithms.
The two major MWI modalities are Microwave Tomography
(MT) and Breast Microwave Radar (BMR). The basic MWI experimental system
consists of a chamber in which the OI to be imaged is placed. Microwaves are introduced via antennas within the chamber. The microwave field or signal is measured using antennas, solid-state sensors or field probes
distributed inside the chamber. MT techniques form a dielectric profile
using electromagnetic waves of selected microwave frequencies and by
solving a nonlinear and ill-posed inverse scattering problem. Breast
Microwave Radar (BMR) uses Ultra Wide Band (UWB) signals to form a
reflectivity map of the scanned region. While BMR approaches cannot
generate a dielectric map, they determine the location of strong
scattering signatures which are associated with malignant lesions and
are capable of forming high contrast 3D images where mm size inclusions
can be resolved.
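The reflectivity map mentioned above is classically formed by confocal delay-and-sum beamforming; the minimal monostatic sketch below illustrates the idea. The propagation speed, geometry, and sampling are placeholder assumptions, and the group's holographic method described next is different.

```python
import numpy as np

def das_reflectivity(signals, fs, antennas, grid, c=2.0e8):
    """signals: (n_antennas, n_samples) UWB echo records, one per antenna.
    antennas: (n_antennas, 3) positions; grid: (n_points, 3) candidate
    scatterer locations. Returns a summed-energy reflectivity per point."""
    img = np.zeros(len(grid))
    for p, r in enumerate(grid):
        acc = 0.0
        for a, ant in enumerate(antennas):
            tau = 2.0 * np.linalg.norm(r - ant) / c   # round-trip delay
            idx = int(round(tau * fs))                # matching sample index
            if idx < signals.shape[1]:
                acc += signals[a, idx]                # coherent sum
        img[p] = acc ** 2                             # energy at this voxel
    return img
```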
During the last ten years, our research groups have been
working on the development of novel reconstruction algorithms and
sensing technologies to increase the quality of MT and BMR images. This
presentation will focus on a number of novel approaches that we are
investigating. These include i) BMR Holography, which processes the
spectrum of the recorded responses from the breast structure and
compensates for the effect of the scan geometry to create an accurate
reconstruction, ii) the Modulated Scattering Technique (MST), which uses
small probes to reduce the field perturbation, allowing simpler MT
inversion techniques and iii) the use of small spintronic devices which
can detect the amplitude and phase of the microwave signal over a wide
frequency band in order to determine the time delay of a microwave
signal scattered by the target.
These techniques require advanced signal processing and
analysis in order to reconstruct images of objects that have small radar
cross sections and small contrast to noise ratios. In this
presentation, I will describe the techniques we are applying and will
use phantom and patient images to illustrate the benefits and challenges
that still face us.
Speaker:Prof. Chin-Hui Lee, PhD
Professor, Center for Signal and Image Processing.
School of Electrical and Computer Engineering.
Georgia Institute of Technology. Atlanta, GA. 30332-0250, USA
Title:Discriminative Training from Big Data with Decision-Feedback Learning
Abstract: Recently, discriminative training (DT) has attracted renewed attention in speech, language and
multimedia processing because of its ability to achieve better
performance and enhanced robustness in pattern recognition than
conventional model training algorithms. When probabilistic densities are
used to characterize class representations, optimization criteria, such
as minimum mean squared error (MMSE), maximum likelihood (ML), maximum a
posteriori (MAP), or maximum entropy (ME), are often adopted to
estimate the parameters of the competing distributions. However the
objective in pattern recognition or verification is usually different
from density approximation. On the other hand decision-feedback
learning (DFL) adjusts these parameters according to the decision made
with the current set of estimated discriminants such that it often
implies learning decision boundaries. In essence, DFL attempts to jointly estimate all the parameters of the competing discriminants together
to meet the performance requirements of a specific problem setting.
This provides a new perspective in the recent push of Big Data
initiatives especially in cases when the underlying distributions of the
data are not completely known.
The key to DFL-based DT is that a decision
function that determines the performance for a given training set is
smoothly embedded in the objective functions so that their parameters
can be learned by adjusting their current values to optimize the desired
evaluation metrics in a direction guided by the feedback obtained from
the current set of decision parameters. Some popular performance
criteria include minimum classification error (MCE), minimum
verification error (MVE), maximal figure-of-merit (MFoM), maximum
average precision (MAP), and minimum area under the receiver operating
characteristic curve (MAUC).
In theory the DFL-based algorithms
asymptotically achieve the best performance almost surely for a given
training set with their corresponding features, classifiers and
verifiers without using the knowledge of the underlying competing
distributions. In practice DFL offers a data-centric learning
perspective and reduces the error rates by as much as 40% in many
pattern recognition and verification problems, such as automatic speech
recognition, speaker recognition, utterance verification, spoken
language recognition, text categorization, and automatic image
annotation, without the need to change the system architectures.
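As one concrete instance of decision-feedback learning, the classical MCE/GPD update for linear discriminants embeds the decision in a smooth sigmoid loss so that gradient feedback can adjust the competing parameters. The learning rate and smoothness constant below are illustrative assumptions.

```python
import numpy as np

def mce_update(W, x, y, lr=0.1, alpha=4.0):
    """One minimum-classification-error update for linear discriminants
    g_k(x) = W[k] . x, given a sample x with true class y."""
    g = W @ x
    rivals = [k for k in range(W.shape[0]) if k != y]
    j = max(rivals, key=lambda k: g[k])        # strongest competing class
    d = g[j] - g[y]                            # misclassification measure
    ell = 1.0 / (1.0 + np.exp(-alpha * d))     # smoothed 0/1 loss
    grad = alpha * ell * (1.0 - ell)           # derivative of the sigmoid
    W[y] += lr * grad * x                      # push the correct class up
    W[j] -= lr * grad * x                      # push the competitor down
    return W, ell
```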
Speaker:Prof. Dr.-Ing. Ulrich Heute
Professor at the Faculty of Engineering (TF),
Christian-Albrechts-University Kiel (since 10/93), D-24143 Kiel, Germany
Title:DSP for Brain Signals
Abstract: All human activities originate from and lead to electrical events in the central nervous system. The corresponding currents may be recorded by well-established electro-encephalography (EEG) or by the relatively new magneto-encephalography (MEG). Both yield complementary information. MEG in particular – whether with existing superconducting sensors ("SQUID MEG") or with new room-temperature sensors developed in a large project at Kiel – delivers tiny signals. Their measurement in an unshielded surrounding needs sophisticated analog pick-up electronics, filtering disturbances as much as possible. However, a certain component may be disturbing in one application but carry information in another, so dedicated removal of well-defined signal parts after digitization of a lightly pre-processed signal is preferable. By means of Digital Signal Processing (DSP), activities which are irrelevant for a given investigation, but especially different types of noise as well as strong endogenous and exogenous artifacts, can be removed. Within the above large project, algorithms for this task have been developed and applied: noise originating from various sources (thermal noise, shot noise, Barkhausen noise) can be treated via linear (digital) filters, adaptive Wiener filtering, or Empirical Mode Decomposition (EMD). By EMD, slowly varying offsets ("trends") can also be removed, as well as muscle artifacts. Muscle and eye-movement artifacts may be tackled via Independent Component Analysis (ICA), combined with, again, simple filtering or, better, Kalman filtering. Also, artifact components after ICA may have to be "cleaned" of other activities by Wiener filtering before removal. Among external artifacts, the classical power-supply harmonics have to be dealt with. Simple notch or comb filters have disadvantages; a "hybrid filter", designed signal-adaptively in the frequency domain and applied in the time domain, is the best, though expensive, solution – also for the strong harmonic disturbances from a deep-brain stimulator. For certain artifacts, reference signals may be available, e.g. an ECG for heartbeat artifacts in EEG and MEG; then compensation after adaptive equalization is possible. The same holds for eye-blinking artifacts with an additional oculogram measurement.
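As a baseline for the power-supply harmonics mentioned above, the simple notch-cascade approach (whose disadvantages the talk notes, and which the hybrid filter improves on) can be sketched as follows. The sampling rate, mains frequency, and Q below are assumed values.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_mains(x, fs=1000.0, mains=50.0, n_harmonics=4, Q=30.0):
    """Suppress the mains fundamental and its harmonics with a cascade of
    second-order IIR notches, applied zero-phase to avoid waveform distortion."""
    for k in range(1, n_harmonics + 1):
        f0 = k * mains
        if f0 >= fs / 2:                  # stop below the Nyquist frequency
            break
        b, a = iirnotch(f0, Q, fs=fs)     # notch centered at the k-th harmonic
        x = filtfilt(b, a, x)             # forward-backward filtering
    return x
```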
Speaker:Prof. Dong Hwa Kim
Professor at Dept. of Instrumentation and Control Engineering,
Hanbat National University,
16-1 Duckmyong dong Yuseong gu Daejeon, South Korea 305-719
Title: Research Experience on Artificial Intelligence and Emotion Control, and Realistic Information Exchange System
Abstract: First of all, this lecture presents research experience with immune systems, genetic algorithms, particle swarm optimization, bacterial foraging, their hybrid systems, and applications to real systems. It will also show research experience and results on emotion for emotion robots using AI. This experience shows that immune systems, PSO (Particle Swarm Optimization), BF (Bacterial Foraging), and hybrid systems can serve as strong optimization tools for engineering fields.
In more detail, the lecture describes the research background of immune-network-based, PSO-based, and bacterial-foraging-based intelligent algorithms, and the characteristics of novel algorithms created by fusing them. It also illustrates the motivation and background for applying these algorithms to industrial automation systems.
Second, the lecture illustrates the immune algorithm as applied to various plants, in order to investigate its characteristics and its applicability. In detail, the immune algorithm will be described through case studies probing its applicability to plants. Conditions for disturbance-rejection control in the AVR of a thermal power plant are suggested, and the algorithm is introduced into the tuning method of its controller.
The conventional genetic algorithm takes a long time to compute and cannot incorporate a variety of plant information, because it uses sequential computing methods; this is a problem when building artificial intelligence for optimization. This lecture shows the improved results obtained by introducing the clonal selection of the immune algorithm into the computing procedure: the information necessary for plant operating conditions (transfer function, time constant, etc.) can be calculated simultaneously. Computing time is therefore about 30% shorter than that of the conventional genetic algorithm, and overshoot is 10.6% smaller when the method is applied to the controller.
The lecture will also introduce an immune-algorithm-based parameter estimation method for obtaining a model of an induction motor, suggesting how optimal parameter values can be obtained under load variation.
Finally, the lecture will introduce an intelligent system using GA-PSO. Euclidean data distance is used to obtain fast global (rather than local) optimization by exploiting wide data, and a novel hybrid GA-PSO intelligent tuning method, in which the genetic algorithm and PSO (Particle Swarm Optimization) are fused, is suggested.
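For reference, the core PSO update that the GA-PSO hybrid builds on is sketched below; the genetic operators and the Euclidean-distance mechanism from the lecture are not reproduced, and all constants are conventional defaults rather than the lecture's settings.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over R^dim with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))            # particle positions
    v = np.zeros((n, dim))                      # particle velocities
    pbest = x.copy()                            # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# g, fmin = pso(lambda z: np.sum(z ** 2), dim=5)   # toy quadratic objective
```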
Speaker:Prof. Irene Y.H. Gu
Professor, Signal Processing Group,
Dept. of Signals and Systems, Chalmers University of Technology,
Gothenburg, 41296, Sweden
Title:Domain-Shift Object Tracking: manifold learning and tracking of large-size video objects with out-of-plane pose changes
Abstract:Many dynamic objects in videos
contain out-of-plane pose changes accompanied by other deformation and
long-term partial occlusions, and the size of objects could be large in
images. In such scenarios, visual tracking using video from a single
camera is challenging. It is desirable that tracking be performed on smooth manifolds in such scenarios. Stochastic modeling on
manifolds is also important for tracking robustness.
In this talk, domain-shift tracking and learning
on smooth manifolds are addressed. First, we review some basic concepts
of manifolds and some commonly-used manifold tracking methods. We then
present a nonlinear dynamic model on a smooth (e.g. Grassmann,
Riemannian) manifold, from which Bayesian formulae are built on the
manifold, rather than in a single vector space as in the conventional
cases. Based on the model, particle filters are employed on the
manifold. We also consider domain-shift online learning with occlusion
handling. While it is essential for learning dynamic objects including
deformable out-of-plane motion for reducing tracking drift, one also
needs to prevent the learning when changes are caused by other occluding
objects or clutter. We show some examples of such online learning
approaches. Finally, some demonstrations and evaluations from such a
domain-shift tracker are shown, along with comparisons of results to
several state-of-the-art methods.
Speaker:Prof. Patrick Gaydecki
Professor, Sensing, Imaging and Signal Processing Group,
School of Electrical and Electronic Engineering, University of Manchester, Manchester M60 1QD,
United Kingdom
Title:Intuitive Real-Time Platform for Audio Signal Processing and Musical Instrument Response Emulation
Abstract: In recent years, the DSP group at the University of Manchester has developed a range of DSP platforms for real-time filtering and processing of acoustic signals. These include Signal Wizard 2.5, Signal Wizard 3 and Vsound. These incorporate processors operating at 100 million multiplication-accumulations per second (MMACs) for SW 2.5 and 600 MMACs for SW 3 and Vsound. SW 3 features six input and eight output analogue channels, digital input/output in the form of S/PDIF, and a USB interface. For all devices, the software allows the user, with no knowledge of filter theory or programming, to design and run standard or completely arbitrary FIR, IIR and adaptive filters. Processing tasks are specified using a graphical icon-based interface. In addition, the system has the capability to emulate in real time linear system behavior such as sensors, instrument bodies, string vibrations, resonant spaces and electrical networks. Tests have confirmed a high degree of fidelity between the behavior of the physical system and its digitally emulated counterpart. In addition to the supplied software, the user may also program the system using a variety of commercial packages via the JTAG interface.
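The "completely arbitrary" filter capability can be illustrated with the textbook frequency-sampling recipe: sample a drawn magnitude response, inverse-FFT it, then shift and window the result. This is a generic sketch, not Signal Wizard's own design engine; the response in the usage note is an arbitrary example.

```python
import numpy as np

def arbitrary_fir(desired_mag, numtaps):
    """desired_mag: magnitude samples on [0, fs/2]; numtaps must satisfy
    numtaps <= 2 * (len(desired_mag) - 1)."""
    # Mirror to a full Hermitian (zero-phase) spectrum, then invert.
    full = np.concatenate([desired_mag, desired_mag[-2:0:-1]])
    h = np.real(np.fft.ifft(full))            # zero-phase impulse response
    h = np.roll(h, numtaps // 2)[:numtaps]    # shift to causal, truncate
    return h * np.hamming(numtaps)            # window to control ripple

# Any drawable curve works as the target magnitude, e.g.:
# h = arbitrary_fir(np.abs(np.sin(np.linspace(0, 3 * np.pi, 512))), 127)
```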
Speaker:Dr. Karlheinz Brandenburg
Professor, Institut fuer Medientechnik, TU Ilmenau, PF 100565, 98684 Ilmenau, Helmholtzplatz 2
Fraunhofer- Institut Digitale Medientechnologie Ehrenbergstr. 31, 98693, Ilmenau, Germany
Title:Audio and Acoustic Signal Processing: The quest for High Fidelity Continues
Abstract: The dream of high fidelity has continued for more than 100 years. In the last decades, signal processing has contributed many new solutions and a vast amount of additional knowledge to this field. These include simple solutions like matrix multichannel systems; audio coding, which changed the world of music distribution and listening habits; active noise control; active modification of room acoustics; search and recommendation technologies to find your favourite music; and many more. So are there any problems left to be solved? Among others, I see two main research areas: Music Information Retrieval (MIR), helping us to find and organise music or to teach the playing of musical instruments, and immersive technologies for movie theatres and eventually our homes, creating the illusion of being at some other place. For such systems we use our knowledge about hearing, especially how ear and brain work together to form the sensation of sound. However, our knowledge about hearing, about psychoacoustics, is still far from complete. In fact, just in the last few years we have learned a lot about what we don't know.
The talk will touch on a number of the subjects
above, explain some current work and its applications and finally talk
about open research questions regarding psychoacoustics and the
evaluation of audio quality.
Speaker:Patrizio Campisi Ph.D.
Professor, Section of Applied Electronics,
Department of Engineering, Università degli Studi Roma Tre,
Via Vito Volterra 62, 00146 Roma, Italy
Title:Biometrics and Neuroscience: a possible marriage?
Abstract: In recent years, biometric recognition, that is, the automated recognition of individuals based on their behavioral and biological characteristics, has emerged as a convenient and possibly secure method for user authentication. In this talk we examine the feasibility of using brain signals as a distinctive characteristic for automatic user recognition. Despite the broad interest in clinical applications, the use of brain signals sensed by means of the electroencephalogram (EEG) has only recently been investigated by the scientific community as a biometric characteristic. Nevertheless, brain signals present some peculiarities, not shared by the most commonly used biometrics like face, iris, and fingerprints, concerning privacy compliance, robustness against spoofing, the possibility to perform continuous identification, intrinsic liveness detection, and universality, which make their use appealing. However, many questions remain open and need deeper investigation. Therefore in this talk, taking a holistic approach, we discuss issues such as the level of EEG stability in time for the same user, the user discriminability that EEG signals can guarantee, and the relationship of these characteristics with the different elements of the employed acquisition protocol, such as the stimulus and the electrode placement and number. A detailed overview and a comparative analysis of state-of-the-art approaches will be given. Finally, the most challenging research issues in the design of EEG-based biometric systems are outlined.
Speaker: Prof. Philip James Wilkinson, Australia
President, International Union of Radio Science (URSI)
Member URSI/COSPAR Working Group on International Reference Ionosphere
Title: URSI - what is its role in the 21st century?
Abstract: The heart of URSI (the International Union of Radio Science) is radio science, an enabling science that permeates society and is central to much of modern technology. The founding body of URSI met in Belgium in 1914, and the first URSI General Assembly took place in Belgium in 1922. URSI joined the International Research Council (IRC, 1919-1931) in 1922, and in 1931 the IRC became ICSU (now the International Council for Science), making URSI a founding scientific Union of ICSU. How relevant is such a historic body as URSI one hundred years after it was formed? This address will not answer that question directly, nor the equivalent question in the title of this talk. Instead, some of the ingredients for future success will be put forward, including a selection of the new science URSI scientists engage in as well as the changes URSI will promote in coming years.
Speaker: Prof. Kazuya Kobayashi
Department of Electrical, Electronic, and Communication Engineering,
Chuo University, Tokyo, Japan
President of the Japan National Committee of URSI
Title: Rigorous Radar Cross Section Analysis of a Finite Parallel-Plate Waveguide with Material Loading
Abstract: The analysis of electromagnetic scattering by open-ended metallic waveguide cavities is an important subject in the prediction and reduction of the radar cross section (RCS) of a target. This problem serves as a simple model of duct structures such as the jet engine intakes of aircraft and cracks occurring on the surfaces of general complicated bodies. Some of the diffraction problems involving two- and three-dimensional cavities have been analyzed thus far using high-frequency techniques and numerical methods. It appears, however, that the solutions obtained by these approaches are not uniformly valid for arbitrary dimensions of the cavity. It is therefore desirable to overcome the drawbacks of previous works and obtain solutions that are uniformly valid for arbitrary cavity dimensions. The Wiener-Hopf technique is known as a powerful, rigorous approach for analyzing scattering and diffraction problems involving canonical geometries. In this contribution, we consider a finite parallel-plate waveguide with four-layer material loading as a geometry that can form cavities, and rigorously analyze the plane wave diffraction using the Wiener-Hopf technique. Both E and H polarizations are considered.
Introducing the Fourier transform of the scattered field and applying boundary conditions in the transform domain, the problem is formulated in terms of simultaneous Wiener-Hopf equations. The Wiener-Hopf equations are solved via a factorization and decomposition procedure leading to the exact solution. This solution is formal, however, since it involves infinite series with unknown coefficients and infinite branch-cut integrals with unknown integrands. For the infinite series with unknown coefficients, we derive approximate expressions by taking the edge condition into account. For the branch-cut integrals with unknown integrands, we assume that the waveguide length is large compared with the wavelength and apply rigorous asymptotics, which yields high-frequency asymptotic expressions of the branch-cut integrals. Based on these results, an approximate solution of the Wiener-Hopf equations, efficient for numerical computation, is explicitly derived; it involves the numerical solution of appropriate matrix equations. The scattered field in real space is evaluated by taking the inverse Fourier transform and applying the saddle point method. Representative numerical examples of the RCS are shown for various physical parameters, and the far-field scattering characteristics of the waveguide are discussed in detail. The results presented here are valid over a broad frequency range and can be used as a reference solution for validating other analysis methods such as high-frequency techniques and numerical methods.
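For readers new to the method, the following LaTeX fragment sketches the generic structure of a scalar Wiener-Hopf problem of the kind described above. The kernel, unknowns and forcing term are illustrative placeholders, not the actual equations for the loaded parallel-plate geometry, which depend on the specific boundary conditions and material layers.

```latex
% Transform-domain Wiener-Hopf equation: K is the kernel, \Phi_+ and
% \Psi_- are unknowns regular (analytic) in the upper and lower halves
% of the complex \alpha-plane, and F collects the incident-field terms.
K(\alpha)\,\Phi_+(\alpha) + \Psi_-(\alpha) = F(\alpha)

% Factorization of the kernel into factors regular and nonzero in
% overlapping half-planes:
K(\alpha) = K_+(\alpha)\,K_-(\alpha)

% Dividing by K_- and decomposing F/K_- into plus and minus parts
% separates the equation so that each side is regular in one
% half-plane; analytic continuation and Liouville's theorem (with the
% edge condition bounding the growth) then determine both sides:
K_+(\alpha)\,\Phi_+(\alpha)
  - \left[\frac{F(\alpha)}{K_-(\alpha)}\right]_+
  = \left[\frac{F(\alpha)}{K_-(\alpha)}\right]_-
  - \frac{\Psi_-(\alpha)}{K_-(\alpha)}

% The real-space scattered field is recovered by the inverse Fourier
% transform, whose far-field behaviour follows from the saddle point
% method:
u(x,z) = \frac{1}{2\pi}\int_{-\infty}^{\infty}
  \hat{u}(\alpha,z)\,e^{-i\alpha x}\,\mathrm{d}\alpha
```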
Speaker: Prof. Dr. Sneh Anand
Centre for Biomedical Engineering
Indian Institute of Technology Delhi
Title: Intelligent Real-Time Biological Signal Processor
Abstract: The human brain is a unique, ideal intelligent signal processor. In the human brain, salience activities operate at a supernatural level; the "motherboard" and the "CPU" are intertwined at the subcellular and physiological levels.
It is a delusion to think that everything outside is a volume of space that is "not you and outside of you." In fact, the sense of presence that is you is "everywhere." The main reason you have a stronger awareness of being in a body is simply the multi-sensory intelligence of the body's commands. We have the illusion that our human bodies are solid, but they are over 99.99% empty space. Input signals operate at the emotional, environmental and attentional levels, besides the multiple physical, electromagnetic, chemical, mechanical and microbiological structural changes.
The living system is a complex one that intelligently coordinates the communication channels between body, brain and mind at the atomic and subatomic levels. The natural environment plays a very vital role in programming the millions upon millions of processes occurring in the body at the quantum physical level. The human system transforms itself. Ancient physicians were also physicists; their approaches have been adapted in modern medicine and have shaped the development of medical technologies. Biological sensors operate on algorithms that differ across species; however, the human brain's networks are the most complex self-programmed processors.
Brno University of Technology, Brno, Czech Republic, European Union
Charles Sturt University, Australia